Worker can't find iHD_drv_video.so #223

Open
kenlasko opened this issue Jun 15, 2023 · 37 comments

Labels: bug (Something isn't working), stale (Issue has been inactive for more than 30 days)

Comments

@kenlasko

Describe the bug
When trying to play a transcoded video via a worker, the video fails to play. Worker logs indicate it cannot find iHD_drv_video.so. When I disable ClusterPlex and just use my "normal" PMS pod, HW transcoding works fine.

Intel GPU drivers are installed via Intel device plugins Helm chart: https://intel.github.io/helm-charts/

Same issue happens when using either standard Plex image with DOCKER_MOD or the ClusterPlex image

Relevant log file for worker:

[AVHWDeviceContext @ 0x7fa6496df6c0] libva: VA-API version 1.18.0
[AVHWDeviceContext @ 0x7fa6496df6c0] libva: Trying to open /config/Library/Application Support/Plex Media Server/Cache/va-dri-linux-x86_64/iHD_drv_video.so
[AVHWDeviceContext @ 0x7fa6496df6c0] libva: va_openDriver() returns -1
[AVHWDeviceContext @ 0x7fa6496df6c0] libva: Trying to open /config/Library/Application Support/Plex Media Server/Cache/va-dri-linux-x86_64/i965_drv_video.so
[AVHWDeviceContext @ 0x7fa6496df6c0] libva: va_openDriver() returns -1
[AVHWDeviceContext @ 0x7fa6496df6c0] Failed to initialise VAAPI connection: -1 (unknown libva error).
Device creation failed: -5.
Failed to set value 'vaapi=vaapi:/dev/dri/renderD128' for option 'init_hw_device': I/O error
Error parsing global options: I/O error
Completed transcode
Removing process from taskMap

The /config/Library/Application Support/ folder is empty, which explains why it can't find the driver. I tried placing the driver that I pulled off the Plex server into the codecs PV, but no difference.

Environment
K3S v1.26.5+k3s1
Nodes are Beelink U59s with Intel N5105 processors
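
For reference, a minimal sketch of how the GPU resource exposed by that Intel device plugin gets requested on the worker pod. This is illustrative only; the resource name gpu.intel.com/i915 is taken from the manifests shared later in this thread.

# excerpt from a worker pod spec (illustrative)
containers:
  - name: plex-worker
    resources:
      limits:
        gpu.intel.com/i915: 1   # extended resource advertised by the Intel GPU device plugin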

@kenlasko kenlasko added the bug Something isn't working label Jun 15, 2023
@pabloromeo
Owner

Is that with the worker having the FFMPEG_HWACCEL environment variable set to "vaapi"?

@kenlasko
Author

Yes, it is. Here's the relevant ConfigMap:

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: clusterplex-worker-config
  namespace: media-tools
  labels:
    app.kubernetes.io/name: clusterplex-worker-config
    app.kubernetes.io/part-of: plex
data:
  TZ: America/Toronto
  PGID: '1000'
  PUID: '1000'
  VERSION: docker
  DOCKER_MODS: 'ghcr.io/pabloromeo/clusterplex_worker_dockermod:latest'
  ORCHESTRATOR_URL: 'http://clusterplex-orchestrator:3500'
  LISTENING_PORT: '3501'
  STAT_CPU_INTERVAL: '10000'
  EAE_SUPPORT: '1'
  FFMPEG_HWACCEL: 'vaapi'

@github-actions

This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale Issue has been inactive for more than 30 days label Jul 22, 2023
@todaywasawesome

I'm having the same issue.

Logging into the container, it looks like Plex isn't "fully installed"; there should be a cache with the extensions in those folders. See this Reddit discussion, as it's the same error: https://www.reddit.com/r/PleX/comments/12ikwup/plex_docker_hardware_transcoding_issue/

@todaywasawesome

What's odd to me is that local transcoding works; it's only on the remote workers that it fails.

@github-actions github-actions bot removed the stale Issue has been inactive for more than 30 days label Jul 25, 2023
@todaywasawesome

@kenlasko @pabloromeo OK, I got it working. The clue was that Plex didn't have its config directory set up in the worker nodes. Plex needs its configuration, otherwise it fails because Plex basically isn't set up. Here's how I fixed it:

  1. Change clusterplex-config-pvc PVC to ReadWriteMany
  2. Add the config mount to the clusterplex-worker statefulset just like you've already done with the pms deployment.

Here's what my two files look like, though yours will look different depending on storage.

Clusterplex-worker

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: clusterplex-worker
  labels:
    app.kubernetes.io/name: clusterplex-worker
    app.kubernetes.io/part-of: clusterplex
spec:
  serviceName: clusterplex-worker-service
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: clusterplex-worker
      app.kubernetes.io/part-of: clusterplex
  template:
    metadata:
      labels:
        app.kubernetes.io/name: clusterplex-worker
        app.kubernetes.io/part-of: clusterplex
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  name: clusterplex-worker
              topologyKey: kubernetes.io/hostname
            weight: 100
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  name: clusterplex-pms
              topologyKey: kubernetes.io/hostname
            weight: 50
      containers:
      - name: plex-worker
        image: lscr.io/linuxserver/plex:latest
        startupProbe:
          httpGet:
            path: /health
            port: 3501
          failureThreshold: 40
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /health
            port: 3501
          initialDelaySeconds: 60
          timeoutSeconds: 5
        livenessProbe:
          httpGet:
            path: /health
            port: 3501
          initialDelaySeconds: 10
          timeoutSeconds: 10
        ports:
          - name: worker
            containerPort: 3501
        envFrom:
        - configMapRef:
            name: clusterplex-worker-config
        volumeMounts:
        - name: data
          mountPath: /data
        - name: codecs
          mountPath: /codecs
        - name: data
          mountPath: /transcode
        - name: config
          mountPath: /config
        resources:              # adapt requests and limits to your needs
          requests:
            cpu: 500m
            memory: 200Mi
          limits:
            gpu.intel.com/i915: 1
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: "plex-media"
      - name: config
        persistentVolumeClaim:
          claimName: "clusterplex-config-pvc"
      # - name: transcode
      #   persistentVolumeClaim:
      #     claimName: "plex-media"
  volumeClaimTemplates:
    - metadata:
        name: codecs
        labels:
          app.kubernetes.io/name: clusterplex-codecs-pvc
          app.kubernetes.io/part-of: clusterplex
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
        # specify your storage class
        storageClassName: longhorn

clusterplex-config-pvc

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clusterplex-config-pvc
  labels:
    app.kubernetes.io/name: clusterplex-config-pvc
    app.kubernetes.io/part-of: clusterplex
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: "10Gi"
  # specify your storage class
  storageClassName: longhorn

@pabloromeo
Owner

I see! Yeah, the fact that Plex is not set up in the Workers is actually intentional. It shouldn't really be necessary, since the intention is to only use the Plex transcoder (their fork of FFmpeg) without actually interacting with the local Plex files. We use their base image to avoid redistributing their transcoder ourselves, but Plex doesn't really run on the worker.
It's odd that it wants to use drivers within Plex's cache instead of the ones you installed on the node.

The reason we don't recommend sharing Plex's config in that way, using shares, is because Plex uses SQLite as a database, which does not play well with network shares. And Longhorn's RWX is implemented with NFS behind the scenes. So you might end up corrupting the database or seeing odd issues.
Maybe you can mount JUST the cache location, to avoid any DB corruption. Meaning, just sharing /config/Library/Application Support/Plex Media Server/Cache/ or /config/Library/Application Support/Plex Media Server/Cache/va-dri-linux-x86_64/
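
As a hedged sketch of that idea, mounting only the Cache subtree into the worker could look something like the following, assuming the config PVC is RWX and using a subPath (volume and claim names are illustrative, and the path should match your install):

# illustrative only: mount just the Cache subtree from the shared config PVC into the worker
volumeMounts:
  - name: pms-config
    mountPath: /config/Library/Application Support/Plex Media Server/Cache
    subPath: Library/Application Support/Plex Media Server/Cache
    readOnly: true
volumes:
  - name: pms-config
    persistentVolumeClaim:
      claimName: clusterplex-config-pvc   # would need to be ReadWriteMany to be shared like this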

I'll see if I can set up a physical environment similar to yours, to see if there's a way around that. Maybe driver paths must be rewritten or something like that. I know others are running it with intel drivers on k8s, but I'm not aware if they had to do this same workaround or not.

@todaywasawesome

@pabloromeo excellent, I've been thinking about potential issues with my setup and what you've said makes sense. I'll try to see if I can do just the cache.

@todaywasawesome

todaywasawesome commented Jul 26, 2023

I mounted the Plex config in a different directory, then exec'd into the container and copied just the cache. No go; it throws errors:

[AVHWDeviceContext @ 0x7fdfdb7b2980] libva: Trying to open /config/Library/Application Support/Plex Media Server/Cache/va-dri-linux-x86_64/iHD_drv_video.so
[AVHWDeviceContext @ 0x7fdfdb7b2980] libva: va_openDriver() returns -1
[AVHWDeviceContext @ 0x7fdfdb7b2980] libva: Trying to open /config/Library/Application Support/Plex Media Server/Cache/va-dri-linux-x86_64/i965_drv_video.so
[AVHWDeviceContext @ 0x7fdfdb7b2980] libva: va_openDriver() returns -1
[AVHWDeviceContext @ 0x7fdfdb7b2980] Failed to initialise VAAPI connection: -1 (unknown libva error).
Device creation failed: -5.
Failed to set value 'vaapi=vaapi:/dev/dri/renderD128' for option 'init_hw_device': I/O error
Error parsing global options: I/O error
Completed transcode
Removing process from taskMap

After that, I copied everything from the temp folder and hardware transcoding works fine.

We might actually be running into something to do with Plex having to be on premium and having a claim token to run HW transcoding.

Another variation I tried was adding the Plex config as read-only; unfortunately the workers can't start, because they can't run the fix-permissions scripts that run on startup.

@github-actions

This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale Issue has been inactive for more than 30 days label Aug 27, 2023
@pabloromeo pabloromeo removed the stale Issue has been inactive for more than 30 days label Aug 28, 2023
@seang96
Contributor

seang96 commented Sep 10, 2023

I am doing a Helm chart deployment and ran into this issue. I had already customized the charts to pass the HW transcoding env variable to the workers, so I also customized them to include the config, and it no longer errors. I'm not too knowledgeable about editing Helm charts or Plex, but what if we mounted the directory or files holding the SQLite DBs read-only?

@audiophonicz

audiophonicz commented Sep 11, 2023

Hello, I just started using this and came across this issue while verifying settings for HW Transcode on my NUC cluster.

Thanks for finding this issue before I experienced it :)

@todaywasawesome, I noticed the iHD_drv_video.so you referenced wasn't actually in Plex Media Server/Cache, but linked into it from Plex Media Server/Drivers/imd-74-linux-x86_64/dri/iHD_drv_video.so.

To get around the issue by sharing both the Cache and Drivers folders with the workers as ReadOnly, while excluding the rest of the config so as not to disturb the DB, I have:

  • Left the existing Config PVC as ReadWriteOnce and NOT mounted it to the Worker
  • Created additional tiny PVCs for Cache and Drivers, mounted on PMS and Worker containers in appropriate locations, Worker nodes ReadOnly. 1Gi is overkill but I did 5Gi just in case.

Additional Cache and Driver PVC


---
#cluster-plex_cache-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clusterplex-cache-pvc
  namespace: plex-ns
  labels:
    app.kubernetes.io/name: clusterplex-cache-pvc
    app.kubernetes.io/part-of: clusterplex
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: longhorn
---
#cluster-plex_drivers-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clusterplex-drivers-pvc
  namespace: plex-ns
  labels:
    app.kubernetes.io/name: clusterplex-drivers-pvc
    app.kubernetes.io/part-of: clusterplex
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: longhorn

Worker: (PMS is the same, excluding the readOnly: true on the spec.volumes)

   containers:
      - name: plex-worker
        image: lscr.io/linuxserver/plex:latest
        startupProbe:
          httpGet:
            path: /health
            port: 3501
          failureThreshold: 40
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /health
            port: 3501
          initialDelaySeconds: 60
          timeoutSeconds: 5
        livenessProbe:
          httpGet:
            path: /health
            port: 3501
          initialDelaySeconds: 10
          timeoutSeconds: 10
        ports:
          - name: worker
            containerPort: 3501
        envFrom:
        - configMapRef:
            name: clusterplex-worker-config
        volumeMounts:
        - name: media
          mountPath: /mnt/media
        - name: codecs
          mountPath: /codecs
        - name: transcode
          mountPath: /transcode
        - name: cache
          mountPath: /config/Library/Application Support/Plex Media Server/Cache
        - name: drivers
          mountPath: /config/Library/Application Support/Plex Media Server/Drivers
        resources:              # adapt requests and limits to your needs
          requests:
            cpu: 500m
            memory: 200Mi
            gpu.intel.com/i915: "1" 
          limits:
            cpu: 2000m
            memory: 2Gi
            gpu.intel.com/i915: "1" 
      volumes:
      - name: media
        nfs:
          path: /mediastuff
          server: myserver.example.local
      - name: transcode
        persistentVolumeClaim:
          claimName: "clusterplex-transcode-pvc"
      - name: codecs
        persistentVolumeClaim:
          claimName: "clusterplex-codec-pvc"
      - name: cache
        persistentVolumeClaim:
          claimName: "clusterplex-cache-pvc"
          readOnly: true
      - name: drivers
        persistentVolumeClaim:
          claimName: "clusterplex-drivers-pvc"
          readOnly: true

Folders mounted inside the Worker, with a touch test to verify read-only:

root@clusterplex-worker-0:/# ls -al /config/Library/Application\ Support/Plex\ Media\ Server/
total 10
drwxr-xr-x 4 abc abc 4096 Sep 11 13:43 .
drwxr-xr-x 3 abc abc 4096 Sep 11 13:43 ..
drwxrwxrwx 8 abc abc 1024 Sep 11 13:54 Cache
drwxrwxrwx 3 abc abc 1024 Sep 11 13:43 Driver
root@clusterplex-worker-0:/# touch /config/Library/Application\ Support/Plex\ Media\ Server/Cache/test
touch: cannot touch '/config/Library/Application Support/Plex Media Server/Cache/test': Read-only file system

Remote VAAPI Transcode Success:

JobPoster connected, announcing
Orchestrator requesting pending work
Sending request to orchestrator on: http://clusterplex-orchestrator:3500
Remote Transcoding was successful
Calling external transcoder: /app/transcoder.js
ON_DEATH: debug mode enabled for pid [1977]
Local Relay enabled, traffic proxied through PMS local port 32499
Setting VERBOSE to ON
Sending request to orchestrator on: http://clusterplex-orchestrator:3500
cwd => "/transcode/Transcode/Sessions/plex-transcode-ba2f8489-11e0-4fab-b08d-31f4b42686ae-6c51bcab-01cf-4780-b61e-b99f21fb343a"
args => 

....BLAHBLAHBLAHBLAH...

"LIBVA_DRIVERS_PATH":"/config/Library/Application Support/Plex Media Server/Cache/va-dri-linux-x86_64"

...BLAHBLAHBLAHBLAH... 

FFMPEG_HWACCEL":"vaapi"

...BLAHBLAHBLAH...

"FFMPEG_EXTERNAL_LIBS":"/config/Library/Application\\ Support/Plex\\ Media\\ Server/Codec**s/8217c1c-4578-linux-x86_64/","TRANSCODER_VERBOSE":"1"}

Hope this helps

@pabloromeo
Owner

@audiophonicz that's an extremely clever approach, love it! :)

Now, I've finally set up a similar environment to test this, and have also been seeing the same issue, as well as trying to identify a few workarounds.
However, I believe there may be an issue with this approach of depending on data from the main PMS: it would only work if the machine running PMS also has the same hardware, meaning an Intel iGPU as well. It seems that Plex creates the content of its Drivers directory during initialization, based on the hardware available.

If that's the case, then there may be one other alternative approach that doesn't depend on sharing Drivers and Cache between PMS and the workers: initialize PMS on the workers at startup and then kill it once the local config has been created (I believe the linuxserver image does something along those lines too), so that the drivers for its hardware are downloaded.
I've tried it manually and it appears to work; however, we have to be careful, as we can only do that if the config is NOT being shared with the main PMS. I'm guessing it could destroy or corrupt the real config, so this only applies to a standalone worker that is not sharing configs as shown above.

If this actually works I may add an additional optional parameter to force a PMS initialization on the Workers, but the default will be to not do it, to avoid breaking working installations like the ones mentioned above.
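
Purely to illustrate the idea (the variable name below is hypothetical and does not exist in ClusterPlex today), such an opt-in could end up being one more entry in the worker ConfigMap:

data:
  # hypothetical flag, not an actual ClusterPlex setting:
  # let Plex initialize on the worker at startup so it downloads drivers for the local hardware
  FORCE_PMS_INIT: '1'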

Now, a question for you @audiophonicz and @todaywasawesome: when HW transcoding on the worker with your working setups, does Plex show that it's being transcoded by HW, or is it oblivious to it? In my initial test it's just saying "Transcode", not "Transcode (hw)".

@todaywasawesome

@pabloromeo It's been transcode (hw) for me. Making sure to mount the needed hardware of course.

I do have a concern that it might be limited based on the license. HW is a pro feature, so if Plex doesn't initialize as pro, it wouldn't enable HW transcoding. We might be able to use a claim key.
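
If a claim token does turn out to matter, the linuxserver image accepts one through the PLEX_CLAIM environment variable, so it could in theory be added to the worker ConfigMap (untested here; the token value below is a placeholder):

data:
  PLEX_CLAIM: 'claim-xxxxxxxxxxxxxxxxxxxx'   # token from https://plex.tv/claim (placeholder value)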

@audiophonicz

So, weird update: my method works, but ONLY if the worker container is on the same physical node as the PMS container. There's no difference in the logs until it actually connects and starts to stream; then the remote workers simply kill the child process. I can even see the tile flash up in the PMS dashboard for half a second, then it disappears and tries another worker. When it finally gets to the worker on the same physical node, the logs pick up from "segment:'chunk-00000'" and it starts playing.

[tcp @ 0x7ff2039fd440] Successfully connected to 10.10.2.20 port 32499
[AVIOContext @ 0x7ff203887cc0] Statistics: 57 bytes written, 0 seeks, 1 writeouts
[segment @ 0x7ff20d4356c0] segment:'chunk-00000' starts with packet stream:0 pts:274024 pts_time:274.024 frame:0
Killing child processes for task 35326182-1edb-49d9-86a4-9079d2e90e3d
Removing process from taskMap

@todaywasawesome can you confirm you can transcode on a worker container on a different physical node than PMS when sharing the entire config? I'm thinking you're right about the PlexPass thing, and mine is matching the IP or something and only allowing it on the same node.

@pabloromeo
Yes, I have 6 identical nodes, so I was counting on PMS downloading the driver for my workers. Your quick-init approach might be a better direction, but if the server config and the existence of PlexPass are indeed interfering with HW transcoding on the remote workers, then a driver download alone might not work.

Also, while transcoding on Worker-1
(screenshot attached)

@pabloromeo
Owner

pabloromeo commented Sep 14, 2023

Can you check the logs on the workers? That might shed some light on what's going on.

Regarding Plex Pass, it's hard to say how they validate it.
The X_PLEX_TOKEN should be reaching the worker, and I believe it gets validated by a callback to PMS (through the relay).
Unless something within that flow is broken. But without errors in the logs it's quite difficult to identify. Maybe try enabling debug logging in Plex itself and checking the messages in its UI console.

@todaywasawesome

I'll share my logs soon. My cluster is down for ISP issues ATM.

@audiophonicz

TL;DR:
I got remote HW transcoding working pretty reliably by flipping my original workaround: giving the workers the entire /config PVC without readOnly (so far), but sub-mounting the /Plugin Support/ dir (with the databases and whatnot) only into the PMS container as a separate ReadWriteOnce PVC.
One thing I still have to work on here is the pid file overwrite.

Long:
OK, so some weird stuff happened after my last post: 60 seconds after I commented, one of the workers (on another node) got stuck and was the only worker being used, but HW transcodes not only worked, they were damn near instant. Unfortunately, after restarting that pod all of that went away, but it led me to the other issue I opened about transcode processes not stopping.

Anyway, I made some progress on my remote HW transcodes. Providing just the drivers for HW transcode doesn't seem to be enough, as it would only work on the same machine as my PMS pod. Seeing that it seemed to work for todaywasawesome by sharing the whole config dir, which happens to contain a token file and the preferences.xml with the machine-id UUID, I tried his method and was riddled with "SQLite db slow; waiting" or some such logs. So I flipped my original method and created a single additional PVC just for the databases in the /Plugin Support/ folder, to essentially carve them out of the main /config folder, and it seems to have worked.

I am currently playing 7 plays simultaneously across 3 workers:
3x direct play HEVC10
3x HEVC10 SW decode > H264 HW encode
1x HEVC8 HW decode > H264 HW encode

I apparently have a bunch of devices that can't HW decode HEVC10 and it really pushes my little i3-6100U nodes, so they take a good 30-45 seconds to start playing, but it does work. Every now and then one play will freeze or fail and need to retry (pretty sure it's HEVC10 playing havoc), but for the most part auto-play-next and seeking work as well. 99% of my stuff is H264; I only found one with HEVC8 and HEVC10, so I should be good with this setup.

I do still want to try to separate out the pid file so the workers aren't constantly deleting and overwriting each other's pid file. It doesn't seem to hurt right now, but it's not optimal.
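
A hedged sketch of the carve-out described above, on the PMS pod (PVC and volume names are illustrative; the workers would mount only the shared config volume, and the exact directory name should be adjusted to match your install):

# PMS container: share /config with the workers, but keep the databases on a PMS-only PVC
volumeMounts:
  - name: config              # RWX PVC, also mounted by the workers
    mountPath: /config
  - name: plugin-support      # RWO PVC, mounted only here, keeps the DBs off the shared volume
    mountPath: /config/Library/Application Support/Plex Media Server/Plug-in Support
volumes:
  - name: config
    persistentVolumeClaim:
      claimName: clusterplex-config-pvc
  - name: plugin-support
    persistentVolumeClaim:
      claimName: clusterplex-plugin-support-pvc   # illustrative name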

@todaywasawesome

The workaround I tried is copying the folder over manually from a temp config directory to the config directory. That way the worker can do whatever it wants with the local DB; it's trashed anyway.

Not great still.

@audiophonicz

OK guys, I need some insight here. I still, for the life of me, can't get a worker to play if it's not on the same node as PMS. It's driving me mad.

Weird thing is, if both PMS and Worker-0 are on NODE1, Direct Plays will Direct Play, and Transcodes will Transcode, HW or SW, life is good.

If I simply move PMS to NODE2 while Worker-0 is on NODE1, all plays break. Direct Plays try to transcode, and all transcodes fail. It's not the /config dir. It's not the /transcode or /codecs RWX speeds. It's purely about being on the same host or not, and I can't figure out what it's using.

My only idea left is that the transcode job is using https://127.0.0.1 for the video transcode sessions and it's not translating across pods/nodes:

[Req#745a/Transcode/JobRunner] Job running: FFMPEG_EXTERNAL_LIBS='/config/Library/Application\ Support/Plex\ Media\ Server/Codecs/8217c1c-4578-linux-x86_64/' X_PLEX_TOKEN=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx "/usr/lib/plexmediaserver/Plex Transcoder" -codec:0 mp3 -analyzeduration 20000000 -probesize 20000000 -i "/config/Library/Application Support/Plex Media Server/Metadata/TV Shows/3/d5dad9b0d635ffd439712c5dfd135b86a523101.bundle/Contents/_combined/themes/tv.plex.agents.series_e6fccc112eb130590ea2d245d869fedce8d276e9" -filter_complex "[0:0] aresample=async=1:ochl='stereo':rematrix_maxval=0.000000dB:osr=48000:rematrix_volume=-25.000000dB[0]" -map "[0]" -codec:0 libmp3lame -q:0 0 -f segment -segment_format mp3 -segment_time 1 -segment_header_filename header -segment_start_number 0 -segment_list "http://127.0.0.1:32400/video/:/transcode/session/3718081b-027b-4a0f-b1a1-fb99766945bf-64/cce15ce6-6af7-46fd-abbf-dd42e5e4609b/manifest?X-Plex-Http-Pipeline=infinite" -segment_list_type csv -segment_list_unfinished 1 -segment_list_size 5 -segment_list_separate_stream_times 1 -map_metadata -1 -map_chapters -1 "chunk-%05d" -y -nostats -loglevel quiet -loglevel_plex error -progressurl http://127.0.0.1:32400/video/:/transcode/session/3718081b-027b-4a0f-b1a1-fb99766945bf-64/cce15ce6-6af7-46fd-abbf-dd42e5e4609b/progress

@rekh127

rekh127 commented Sep 28, 2023

My only idea left is that the transcode job is using https://127.0.0.1 for the video transcode sessions and it's not translating across pods/nodes:

Plex definitely uses a loopback network for transcodes. On my FreeBSD Plex jail, if I don't give it a loopback address, direct plays are fine but transcodes fail (regardless of whether it needs to transcode audio or video). The address I give it is not 127.0.0.1, but it finds it okay.

If direct plays aren't working for you, I'm not sure whether this is the same problem, but it very well might be. Also, maybe the direct play you tested was transcoding audio?

@rekh127

rekh127 commented Sep 28, 2023

OK guys, I need some insight here. I still, for the life of me, can't get a worker to play if it's not on the same node as PMS. It's driving me mad.

Honestly, I think this is probably a different issue, perhaps Plex network configuration? This issue is just about hardware transcoding failing; if you're not getting workers to transcode at all, that's a more fundamental problem.

@audiophonicz

Plex definitely uses a loopback network for transcodes. On my FreeBSD Plex jail, if I don't give it a loopback address, direct plays are fine but transcodes fail (regardless of whether it needs to transcode audio or video). The address I give it is not 127.0.0.1, but it finds it okay.

Thank you for your reply, but my question is specifically about HW transcoding across physically separate Kubernetes nodes, and I'm not sure how a FreeBSD jail pertains. I don't see transcode network settings anywhere in this chart, so I'm not sure what this "it" is that you are giving a loopback address.

Still looking for someone who has HW transcoding working across two physically separate nodes, and what your Plex network settings are for subnets and URL.

@rekh127

rekh127 commented Sep 28, 2023

Sorry for the confusion; the long and short of it is yes, that's where Plex communicates with the transcoder. The transcoder stub here remaps that to a different container, and the nginx proxy passes it back in.

If direct play and SW transcoding are also failing, your issue isn't really about HW transcoding; it's something else broken in the orchestration of the transcoder requests.

@flopon

flopon commented Oct 18, 2023

Same issue here (Dockermod on unprivileged LXC on Proxmox).

Mounting /config/Library/Application Support/Plex Media Server/Cache and /config/Library/Application Support/Plex Media Server/Drivers inside the workers did the trick.

Thanks!
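
For anyone on the docker-compose / Dockermod route, a minimal sketch of what those two extra mounts could look like on the worker service. Service names, volume names, and the device mapping are assumptions for illustration, not an official ClusterPlex compose file:

services:
  plex-worker:
    image: lscr.io/linuxserver/plex:latest
    environment:
      DOCKER_MODS: ghcr.io/pabloromeo/clusterplex_worker_dockermod:latest
      FFMPEG_HWACCEL: vaapi
    devices:
      - /dev/dri:/dev/dri                  # pass the iGPU through to the worker
    volumes:
      # share only Cache and Drivers from the PMS config, read-only on the worker;
      # the PMS service would mount the same two named volumes read-write at the same paths
      - "plex-cache:/config/Library/Application Support/Plex Media Server/Cache:ro"
      - "plex-drivers:/config/Library/Application Support/Plex Media Server/Drivers:ro"

volumes:
  plex-cache:
  plex-drivers: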

@cpfarhood

Remapping just Drivers and Cache as RWX across PMS and the workers fixed this issue for me.


github-actions bot commented Dec 9, 2023

This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale Issue has been inactive for more than 30 days label Dec 9, 2023
@albeltra

albeltra commented Dec 14, 2023

Here to report a different setup that suffers from the same issue:

NAS Host with transcode and media shares exposed over NFS

Separate host running a docker-compose stack of one PMS instance, one worker, and one orchestrator. (no swarm).

Transcode and Media directories mounted over NFS as instructed (Read and Write).

Worker HW transcode fails (intel igpu), while "local" HW transcode succeeds (same physical intel igpu)
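
For context, NFS-backed media/transcode mounts in a docker-compose stack typically look something like the snippet below; the server address and export paths are placeholders, not the reporter's actual setup:

volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nas.example.local,rw,nfsvers=4"   # NAS hostname is illustrative
      device: ":/export/media"
  transcode:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=nas.example.local,rw,nfsvers=4"
      device: ":/export/transcode"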

@github-actions github-actions bot removed the stale Issue has been inactive for more than 30 days label Dec 15, 2023

This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale Issue has been inactive for more than 30 days label Jan 14, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

@github-actions github-actions bot closed this as not planned (Won't fix, can't repro, duplicate, stale) Jan 29, 2024
@pabloromeo pabloromeo reopened this Jan 29, 2024
@pabloromeo
Owner

I'm leaving this issue open, as I'm hoping I'll eventually have time to make this easier to set up. I get the feeling something that might help is to have the Plex install within the Workers initialize during startup, just so that it downloads the appropriate drivers for that particular instance.
I'm mostly going on assumptions here as Plex has no public info on this, but I'm assuming the sharing of drivers directories would only work if PMS and the Worker have the same type of hardware requiring the same drivers. But I could be wrong; maybe Plex downloads ALL available drivers during startup.
Unfortunately I don't have hardware available with different types in order to see what Plex does in those cases.

Has anybody gotten HW transcoding to work simultaneously on, for example, an NVIDIA GPU on one worker and an Intel iGPU on another?

@github-actions github-actions bot removed the stale Issue has been inactive for more than 30 days label Jan 30, 2024
@meestark

meestark commented Feb 4, 2024

Here to report a different setup that suffers from the same issue:

NAS Host with transcode and media shares exposed over NFS

Separate host running a docker-compose stack of one PMS instance, one worker, and one orchestrator. (no swarm).

Transcode and Media directories mounted over NFS as instructed (Read and Write).

Worker HW transcode fails (intel igpu), while "local" HW transcode succeeds (same physical intel igpu)

Similar situation here:

PMS (with qsv hardware transcoding) & Orchestrator on one host
Multiple workers living on multiple other hosts (one configured with FFMPEG_HWACCEL, one without)
HW transcode works on PMS but not on the worker Intel iGPU nor the worker Intel SW (direct stream works fine)

What I noticed was that even though I set FFMPEG_HWACCEL to false, in the worker logs I still see this coming as the request:

[AVHWDeviceContext @ 0x7f2b4c2398c0] libva: VA-API version 1.18.0
[AVHWDeviceContext @ 0x7f2b4c2398c0] libva: User requested driver 'iHD'
[AVHWDeviceContext @ 0x7f2b4c2398c0] libva: Trying to open /config/Library/Application Support/Plex Media Server/Cache/va-dri-linux-x86_64/iHD_drv_video.so
[AVHWDeviceContext @ 0x7f2b4c2398c0] libva: va_openDriver() returns -1
[AVHWDeviceContext @ 0x7f2b4c2398c0] Failed to initialise VAAPI connection: -1 (unknown libva error).
Device creation failed: -5.

With FFMPEG_HWACCEL disabled, my expectation was that the worker would do a CPU transcode. Is there something wrong in my config? PMS initially expects to use the iGPU (and would normally show transcode (hw)) - does the fact that PMS is configured for HW transcode mean that when the job is passed to the orchestrator, it can't go to a software-only worker?

Edit: never mind, everything is fine after I did what the two posters above did: shared and mounted the Cache and Drivers folders to the workers as well.

@meestark

I'm leaving this issue open, as I'm hoping I'll eventually have time to make this easier to set up. I get the feeling something that might help is to have the Plex install within the Workers initialize during startup, just so that it downloads the appropriate drivers for that particular instance. I'm mostly going on assumptions here as Plex has no public info on this, but I'm assuming the sharing of drivers directories would only work if PMS and the Worker have the same type of hardware requiring the same drivers. But I could be wrong; maybe Plex downloads ALL available drivers during startup. Unfortunately I don't have hardware available with different types in order to see what Plex does in those cases.

Has anybody gotten HW transcoding to work simultaneously on, for example, an NVIDIA GPU on one worker and an Intel iGPU on another?

I have been unable so far to get this working in a mixed environment:

Server 1: Running Plex + Orchestrator dockers. Has intel iGPU capability
Node 1: Debian, iGPU. Clusterplex_worker as docker with 'vaapi' set in env
Node 2: Debian, iGPU. Clusterplex_worker as docker with 'vaapi' set in env
Node 3: Ubuntu, Nvidia P2000. Clusterplex_worker as docker with 'cuda' set in env

Nodes 1 & 2 work without issue. Node 3 looks like it's working fine based on the Plex / Orchestrator / Worker logs, but on the client side, no video is delivered (and eventually I have to stop the video). The end of the log below happens before I even hit 'stop' on the client. Here are the relevant excerpts from the clusterplex_worker log:

2024-02-11T17:31:21.174428683Z Received task request
2024-02-11T17:31:21.174473287Z Setting hwaccel to cuda
2024-02-11T17:31:21.174480501Z EAE_ROOT => "undefined"
2024-02-11T17:31:21.180192578Z [tcp @ 0x7f8080466640] Starting connection attempt to 10.147.52.128 port 32400
2024-02-11T17:31:21.180306963Z [tcp @ 0x7f8080466640] Successfully connected to 10.147.52.128 port 32400
2024-02-11T17:31:21.180906827Z ffmpeg version 90a317c-4653 Copyright (c) 2000-2022 the FFmpeg developers
2024-02-11T17:31:21.181374424Z   built with Plex clang version 11.0.1 (https://plex.tv 9b997da8e5b47bdb4a9425b3a3b290be393b4b1f)
2024-02-11T17:31:21.181930927Z   configuration: --disable-static --enable-shared --disable-libx264 --disable-hwaccels --disable-protocol=concat --external-decoder=h264 --enable-debug --enable-muxers --enable-libxml2 --fatal-warnings --disable-gmp --disable-avdevice --disable-bzlib --disable-sdl2 --disable-decoders --disable-devices --disable-encoders --disable-ffprobe --disable-ffplay --disable-doc --disable-iconv --disable-lzma --disable-schannel --disable-linux-perf --disable-mediacodec --enable-eae --disable-protocol='udp,udplite' --arch=x86_64 --target-os=linux --strip=true --cc=x86_64-linux-musl-clang --pkg-config=/data/jenkins/conan_build/1113394794/plexconantool/plex-pkg-config --pkg-config-flags=--static --windres=llvm-windres --enable-cuda-llvm --enable-libdrm --enable-opencl --enable-cross-compile --ar=llvm-ar --nm=llvm-nm --ranlib=llvm-ranlib --extra-ldflags='-Wl,-rpath,/data/jenkins/conan_build/1113394794/conan/.conan/data/libpciaccess/0.17-2/plex/stable/package/7763a87432c78a82fd36373080b064286892cea3/lib -Wl,-rpath,/data/jenkins/conan_build/1113394794/conan/.conan/data/libdrm/2.4.115-3/plex/stable/package/73ee780ba6ea3ef381da6e7f229c475bfaf477ca/lib -Wl,-rpath,/data/jenkins/conan_build/1113394794/conan/.conan/data/intel-gmmlib/22.3.5-2/plex/stable/package/d7d5d1f35ff92a8c39da6b47605055e839a42a9c/lib -Wl,-rpath,/data/jenkins/conan_build/1113394794/conan/.conan/data/libva/2.18.0-3/plex/stable/package/f0f4893209b867ce448a96e25ef4d6b158311557/lib -Wl,-rpath,/data/jenkins/conan_build/1113394794/conan/.conan/data/iconv/1.16-33/plex/stable/package/da4999666f4b1709dd93ae40fffdb2c6f130b23f/lib -Wl,-rpath,/data/jenkins/conan_build/1113394794/conan/.conan/data/openssl/3.1.1-2cf4e90-1/plex/stable/package/121b5d655884b039b2c06c747f3d73ef7b698b66/lib -Wl,-rpath,/data/jenkins/conan_build/1113394794/conan/.conan/data/libpciaccess/0.17-2/plex/stable/package/7763a87432c78a82fd36373080b064286892cea3/lib -Wl,-rpath,/data/jenkins/conan_build/1113394794/conan/.conan/data/libdrm/2.4.115-3/plex/stable/package/73ee780ba6ea3ef381da6e7f229c475bfaf477ca/lib -Wl,-rpath,/data/jenkins/conan_build/1113394794/conan/.conan/data/intel-gmmlib/22.3.5-2/plex/stable/package/d7d5d1f35ff92a8c39da6b47605055e839a42a9c/lib -Wl,-rpath,/data/jenkins/conan_build/1113394794/conan/.conan/data/libva/2.18.0-3/plex/stable/package/f0f4893209b867ce448a96e25ef4d6b158311557/lib -Wl,-rpath,/data/jenkins/conan_build/1113394794/conan/.conan/data/iconv/1.16-33/plex/stable/package/da4999666f4b1709dd93ae40fffdb2c6f130b23f/lib -Wl,-rpath,/data/jenkins/conan_build/1113394794/conan/.conan/data/openssl/3.1.1-2cf4e90-1/plex/stable/package/121b5d655884b039b2c06c747f3d73ef7b698b66/lib -m64 -L/data/jenkins/conan_build/1113394794/conan/.conan/data/opus/1.2.1-35/plex/stable/package/64edc78a49b81c2615dad7b22a9ac90cc029860a/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/libvorbis/1.3.5-39/plex/stable/package/76eba14299c6c14bf4759b1da21aec07c9ca1a2f/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/dav1d/1.0.0-15/plex/stable/package/4d954bcc6be6a68b775ef1b1bae9dd65e4e237ff/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/ffnvcodec/11.0.10.3-a62a66f-2/plex/stable/package/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/openssl/3.1.1-2cf4e90-1/plex/stable/package/121b5d655884b039b2c06c747f3d73ef7b698b66/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/x264/161-1086f45-29/plex/stable/package/64edc78a49b81c2615dad7b22a9ac90cc029860a/lib 
-L/data/jenkins/conan_build/1113394794/conan/.conan/data/zvbi/0.2.35-61/plex/stable/package/7366a567f554439fb9e7a3415c9d1c2ea2b75360/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/libass/0.16.0-6/plex/stable/package/d81dbed7e8ad560c9ec55240308ceb55b203927d/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/mp3lame/3.98.4-34/plex/stable/package/64edc78a49b81c2615dad7b22a9ac90cc029860a/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/intel-media-driver/23.1.6-4/plex/stable/package/3875a0d8ecb43ec019597d9b6eb2624caf21f56d/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/intel-vaapi-driver/2.4.1-30/plex/stable/package/40e844589b988e89914b05176cd5aef02a9ec632/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/opencl-icd-loader/v2022.01.04-169f05d-3/plex/stable/package/f8d8beee6cd001f4d94a9808a356a0bc782f24db/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/libogg/1.3.2-35/plex/stable/package/64edc78a49b81c2615dad7b22a9ac90cc029860a/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/iconv/1.16-33/plex/stable/package/da4999666f4b1709dd93ae40fffdb2c6f130b23f/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/fribidi/1.0.12-3/plex/stable/package/464531ac2a3f2ab2167bd10d1214603bc8116983/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/harfbuzz/4.2.1-5/plex/stable/package/53415d552ac96104f622ffa8d8530937a40b4271/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/libva/2.18.0-3/plex/stable/package/f0f4893209b867ce448a96e25ef4d6b158311557/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/intel-gmmlib/22.3.5-2/plex/stable/package/d7d5d1f35ff92a8c39da6b47605055e839a42a9c/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/fontconfig/2.14.0-5/plex/stable/package/aacc2a7710dfa87ed80d4eea45b80c93243fe456/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/libdrm/2.4.115-3/plex/stable/package/73ee780ba6ea3ef381da6e7f229c475bfaf477ca/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/libxml2/2.9.11-e1bcffea-14/plex/stable/package/33406d37abb556848190dcd6097a9849aa894baf/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/freetype2/2.12.1-27/plex/stable/package/82a00e1e4cc2e8878bb79ae9b5e2235fd8280e6a/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/expat/2.2.5-36/plex/stable/package/64edc78a49b81c2615dad7b22a9ac90cc029860a/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/libuuid/1.0.3-29/plex/stable/package/841d526523d3550ac4d52807df94cbbedce37e2c/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/libpciaccess/0.17-2/plex/stable/package/7763a87432c78a82fd36373080b064286892cea3/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/libpthread-stubs/0.4-36/plex/stable/package/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/bzip2/1.0.6-39/plex/stable/package/618bb3c469051b52e1349cf1a297263df374d15a/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/libpng/1.6.37-42/plex/stable/package/33406d37abb556848190dcd6097a9849aa894baf/lib -L/data/jenkins/conan_build/1113394794/conan/.conan/data/zlib/1.2.11-33/plex/stable/package/64edc78a49b81c2615dad7b22a9ac90cc029860a/lib -Wl,--gc-sections -Wl,-z,noexecstack -Wl,-z,relro -g2 -gdwarf-4 -Wl,--build-id=sha1 -flto=thin -fwhole-program-vtables -Wl,--icf=all -Wl,--threads=6 -Wl,-O2 -l:libgcompat.so.0 -Wl,-rpath,'\''XORIGIN/../lib'\'' -Wl,-rpath,'\''XORIGIN/lib'\'' 
-Wl,--thinlto-cache-dir=/data/jenkins/conan_build/1113394794/conan/.conan/data/ffmpeg/2.0-90a317cbea6-0/plex/stable/build/35580104f2e7e21894e079d82636f00077f193b2/lto_cache/' --extra-libs= --enable-decoder=png --enable-decoder=apng --enable-decoder=bmp --enable-decoder=mjpeg --enable-decoder=thp --enable-decoder=gif --enable-decoder=dirac --enable-decoder=ffv1 --enable-decoder=ffvhuff --enable-decoder=huffyuv --enable-decoder=libdav1d --enable-decoder=av1 --enable-decoder=rawvideo --enable-decoder=zero12v --enable-decoder=ayuv --enable-decoder=r210 --enable-decoder=v210 --enable-decoder=v210x --enable-decoder=v308 --enable-decoder=v408 --enable-decoder=v410 --enable-decoder=y41p --enable-decoder=yuv4 --enable-decoder=ansi --enable-decoder=alac --enable-decoder=flac --enable-decoder=vorbis --enable-decoder=opus --enable-decoder=pcm_f32be --enable-decoder=pcm_f32le --enable-decoder=pcm_f64be --enable-decoder=pcm_f64le --enable-decoder=pcm_lxf --enable-decoder=pcm_s16be --enable-decoder=pcm_s16be_planar --enable-decoder=pcm_s16le --enable-decoder=pcm_s16le_planar --enable-decoder=pcm_s24be --enable-decoder=pcm_s24le --enable-decoder=pcm_s24le_planar --enable-decoder=pcm_s32be --enable-decoder=pcm_s32le --enable-decoder=pcm_s32le_planar --enable-decoder=pcm_s8 --enable-decoder=pcm_s8_planar --enable-decoder=pcm_u16be --enable-decoder=pcm_u16le --enable-decoder=pcm_u24be --enable-decoder=pcm_u24le --enable-decoder=pcm_u32be --enable-decoder=pcm_u32le --enable-decoder=pcm_u8 --enable-decoder=pcm_alaw --enable-decoder=pcm_mulaw --enable-decoder=ass --enable-decoder=dvbsub --enable-decoder=dvdsub --enable-decoder=ccaption --enable-decoder=pgssub --enable-decoder=jacosub --enable-decoder=microdvd --enable-decoder=movtext --enable-decoder=mpl2 --enable-decoder=pjs --enable-decoder=realtext --enable-decoder=sami --enable-decoder=ssa --enable-decoder=stl --enable-decoder=subrip --enable-decoder=subviewer --enable-decoder=text --enable-decoder=vplayer --enable-decoder=webvtt --enable-decoder=xsub --enable-decoder=eac3_eae --enable-decoder=truehd_eae --enable-decoder=mlp_eae --enable-encoder=flac --enable-encoder=alac --enable-encoder=libvorbis --enable-encoder=libopus --enable-encoder=mjpeg --enable-encoder=png --enable-encoder=rawvideo --enable-encoder=wrapped_avframe --enable-encoder=ass --enable-encoder=dvbsub --enable-encoder=dvdsub --enable-encoder=movtext --enable-encoder=ssa --enable-encoder=subrip --enable-encoder=text --enable-encoder=webvtt --enable-encoder=xsub --enable-encoder=pcm_f32be --enable-encoder=pcm_f32le --enable-encoder=pcm_f64be --enable-encoder=pcm_f64le --enable-encoder=pcm_s8 --enable-encoder=pcm_s8_planar --enable-encoder=pcm_s16be --enable-encoder=pcm_s16be_planar --enable-encoder=pcm_s16le --enable-encoder=pcm_s16le_planar --enable-encoder=pcm_s24be --enable-encoder=pcm_s24le --enable-encoder=pcm_s24le_planar --enable-encoder=pcm_s32be --enable-encoder=pcm_s32le --enable-encoder=pcm_s32le_planar --enable-encoder=pcm_u8 --enable-encoder=pcm_u16be --enable-encoder=pcm_u16le --enable-encoder=pcm_u24be --enable-encoder=pcm_u24le --enable-encoder=pcm_u32be --enable-encoder=pcm_u32le --enable-encoder=h264_vaapi --enable-encoder=hevc_vaapi --enable-encoder=h264_nvenc --enable-encoder=eac3_eae --enable-hwaccel=av1_vaapi --enable-hwaccel=av1_nvdec --prefix=/data/jenkins/conan_build/1113394794/conan/.conan/data/ffmpeg/2.0-90a317cbea6-0/plex/stable/build/35580104f2e7e21894e079d82636f00077f193b2/transcoder-install --enable-libzvbi --enable-openssl --enable-libass --enable-libopus 
--enable-libvorbis --enable-libdav1d --extra-cflags='-m64 -O3 -fdata-sections -ffunction-sections -fno-omit-frame-pointer -g2 -gdwarf-4 -fcommon -flto=thin -fwhole-program-vtables -I/data/jenkins/conan_build/1113394794/conan/.conan/data/opus/1.2.1-35/plex/stable/package/64edc78a49b81c2615dad7b22a9ac90cc029860a/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/libvorbis/1.3.5-39/plex/stable/package/76eba14299c6c14bf4759b1da21aec07c9ca1a2f/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/dav1d/1.0.0-15/plex/stable/package/4d954bcc6be6a68b775ef1b1bae9dd65e4e237ff/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/ffnvcodec/11.0.10.3-a62a66f-2/plex/stable/package/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/openssl/3.1.1-2cf4e90-1/plex/stable/package/121b5d655884b039b2c06c747f3d73ef7b698b66/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/x264/161-1086f45-29/plex/stable/package/64edc78a49b81c2615dad7b22a9ac90cc029860a/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/zvbi/0.2.35-61/plex/stable/package/7366a567f554439fb9e7a3415c9d1c2ea2b75360/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/libass/0.16.0-6/plex/stable/package/d81dbed7e8ad560c9ec55240308ceb55b203927d/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/mp3lame/3.98.4-34/plex/stable/package/64edc78a49b81c2615dad7b22a9ac90cc029860a/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/intel-media-driver/23.1.6-4/plex/stable/package/3875a0d8ecb43ec019597d9b6eb2624caf21f56d/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/libogg/1.3.2-35/plex/stable/package/64edc78a49b81c2615dad7b22a9ac90cc029860a/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/iconv/1.16-33/plex/stable/package/da4999666f4b1709dd93ae40fffdb2c6f130b23f/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/fribidi/1.0.12-3/plex/stable/package/464531ac2a3f2ab2167bd10d1214603bc8116983/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/harfbuzz/4.2.1-5/plex/stable/package/53415d552ac96104f622ffa8d8530937a40b4271/include/harfbuzz -I/data/jenkins/conan_build/1113394794/conan/.conan/data/libva/2.18.0-3/plex/stable/package/f0f4893209b867ce448a96e25ef4d6b158311557/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/intel-gmmlib/22.3.5-2/plex/stable/package/d7d5d1f35ff92a8c39da6b47605055e839a42a9c/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/opencl-headers/v2022.01.04-59ac4dc-3/plex/stable/package/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/fontconfig/2.14.0-5/plex/stable/package/aacc2a7710dfa87ed80d4eea45b80c93243fe456/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/libdrm/2.4.115-3/plex/stable/package/73ee780ba6ea3ef381da6e7f229c475bfaf477ca/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/libdrm/2.4.115-3/plex/stable/package/73ee780ba6ea3ef381da6e7f229c475bfaf477ca/include/libdrm -I/data/jenkins/conan_build/1113394794/conan/.conan/data/libxml2/2.9.11-e1bcffea-14/plex/stable/package/33406d37abb556848190dcd6097a9849aa894baf/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/libxml2/2.9.11-e1bcffea-14/plex/stable/package/33406d37abb556848190dcd6097a9849aa894baf/include/libxml2 
-I/data/jenkins/conan_build/1113394794/conan/.conan/data/freetype2/2.12.1-27/plex/stable/package/82a00e1e4cc2e8878bb79ae9b5e2235fd8280e6a/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/expat/2.2.5-36/plex/stable/package/64edc78a49b81c2615dad7b22a9ac90cc029860a/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/libuuid/1.0.3-29/plex/stable/package/841d526523d3550ac4d52807df94cbbedce37e2c/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/libpciaccess/0.17-2/plex/stable/package/7763a87432c78a82fd36373080b064286892cea3/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/bzip2/1.0.6-39/plex/stable/package/618bb3c469051b52e1349cf1a297263df374d15a/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/libpng/1.6.37-42/plex/stable/package/33406d37abb556848190dcd6097a9849aa894baf/include -I/data/jenkins/conan_build/1113394794/conan/.conan/data/zlib/1.2.11-33/plex/stable/package/64edc78a49b81c2615dad7b22a9ac90cc029860a/include -DLIBXML_STATIC -DFRIBIDI_LIB_STATIC -DNDEBUG'
2024-02-11T17:31:21.182614609Z   libavutil      57. 24.101 / 57. 24.101
2024-02-11T17:31:21.183113825Z   libavcodec     59. 25.100 / 59. 25.100
2024-02-11T17:31:21.183488287Z   libavformat    59. 20.101 / 59. 20.101
2024-02-11T17:31:21.183832382Z   libavfilter     8. 29.100 /  8. 29.100
2024-02-11T17:31:21.184157622Z   libswscale      6.  6.100 /  6.  6.100
2024-02-11T17:31:21.184456973Z   libswresample   4.  6.100 /  4.  6.100
2024-02-11T17:31:21.184910023Z Rescanning for external libs: '/codecs/90a317c-4653-linux-x86_64-standard/'
2024-02-11T17:31:21.185358664Z Loading external lib /codecs/90a317c-4653-linux-x86_64-standard/libadpcm_ea_r1_decoder.so
2024-02-11T17:31:21.185846197Z Loading external lib /codecs/90a317c-4653-linux-x86_64-standard/libvmdvideo_decoder.so
2024-02-11T17:31:21.186241158Z Loading external lib /codecs/90a317c-4653-linux-x86_64-standard/libvp5_decoder.so
...
... LOADING MORE EXTERNAL LIBS
...
2024-02-11T17:31:21.336694696Z Input #0, matroska,webm, from '/data/TV/Taskmaster (2015)/Season 13/Taskmaster S13E09 [HDTV-720p][AAC 2.0][x265]-MeGusta.mkv':
2024-02-11T17:31:21.337043179Z   Metadata:
2024-02-11T17:31:21.337539370Z     ENCODER         : Lavf59.17.101
2024-02-11T17:31:21.338010222Z   Duration: 00:46:49.85, start: 0.000000, bitrate: 896 kb/s
2024-02-11T17:31:21.338415352Z   Stream #0:0(eng): Video: hevc (Main 10), 1 reference frame, yuv420p10le(tv, bt709, progressive, left), 1280x720 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 1k tbn (default)
2024-02-11T17:31:21.338845048Z     Metadata:
2024-02-11T17:31:21.339193060Z       BPS             : 1999558
2024-02-11T17:31:21.339601806Z       NUMBER_OF_FRAMES: 70246
2024-02-11T17:31:21.339933418Z       NUMBER_OF_BYTES : 702305022
2024-02-11T17:31:21.340269749Z       _STATISTICS_WRITING_APP: mkvmerge v68.0.0 ('The Curtain') 64-bit
2024-02-11T17:31:21.340636085Z       _STATISTICS_WRITING_DATE_UTC: 2022-06-10 14:48:30
2024-02-11T17:31:21.341002502Z       _STATISTICS_TAGS: BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES
2024-02-11T17:31:21.341308276Z       ENCODER         : Lavc59.20.100 libx265
2024-02-11T17:31:21.341650047Z       DURATION        : 00:46:49.840000000
2024-02-11T17:31:21.342001445Z   Stream #0:1(eng): Audio: aac (LC), 48000 Hz, stereo, fltp (default)
2024-02-11T17:31:21.342370627Z     Metadata:
2024-02-11T17:31:21.342764516Z       title           : English
2024-02-11T17:31:21.343059349Z       BPS             : 128000
2024-02-11T17:31:21.343425896Z       NUMBER_OF_FRAMES: 131714
2024-02-11T17:31:21.343729786Z       NUMBER_OF_BYTES : 44957815
2024-02-11T17:31:21.344135236Z       _STATISTICS_WRITING_APP: mkvmerge v68.0.0 ('The Curtain') 64-bit
2024-02-11T17:31:21.344494580Z       _STATISTICS_WRITING_DATE_UTC: 2022-06-10 14:48:30
2024-02-11T17:31:21.344966625Z       _STATISTICS_TAGS: BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES
2024-02-11T17:31:21.345272698Z       DURATION        : 00:46:49.854000000
2024-02-11T17:31:21.345578181Z   Stream #0:2(eng): Subtitle: ass
2024-02-11T17:31:21.345842406Z     Metadata:
2024-02-11T17:31:21.346105509Z       title           : English
2024-02-11T17:31:21.346436550Z       BPS             : 200
2024-02-11T17:31:21.346790684Z       NUMBER_OF_FRAMES: 950
2024-02-11T17:31:21.347153694Z       NUMBER_OF_BYTES : 70010
2024-02-11T17:31:21.347505163Z       _STATISTICS_WRITING_APP: mkvmerge v68.0.0 ('The Curtain') 64-bit
2024-02-11T17:31:21.347835101Z       _STATISTICS_WRITING_DATE_UTC: 2022-06-10 14:48:30
2024-02-11T17:31:21.348131146Z       _STATISTICS_TAGS: BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES
2024-02-11T17:31:21.348511780Z       ENCODER         : Lavc59.20.100 ssa
2024-02-11T17:31:21.348770104Z       DURATION        : 00:46:43.160000000
2024-02-11T17:31:21.349708964Z [dash @ 0x7f80798cf680] No bit rate set for stream 0
2024-02-11T17:31:21.350112972Z [dash @ 0x7f80798cf680] Opening 'init-stream0.m4s' for writing
2024-02-11T17:31:21.350869891Z [mp4 @ 0x7f80798cf380] Empty MOOV enabled; disabling automatic bitstream filtering
2024-02-11T17:31:21.351236017Z [dash @ 0x7f80798cf680] Representation 0 init segment will be written to: init-stream0.m4s
2024-02-11T17:31:21.351545337Z [dash @ 0x7f80798cf680] No bit rate set for stream 1
2024-02-11T17:31:21.351935699Z [dash @ 0x7f80798cf680] Opening 'init-stream1.m4s' for writing
2024-02-11T17:31:21.352794579Z [mp4 @ 0x7f8079a27040] Empty MOOV enabled; disabling automatic bitstream filtering
2024-02-11T17:31:21.353093900Z [dash @ 0x7f80798cf680] Representation 1 init segment will be written to: init-stream1.m4s
2024-02-11T17:31:21.353401086Z Output #0, dash, to 'dash':
2024-02-11T17:31:21.353743949Z   Metadata:
2024-02-11T17:31:21.354108913Z     encoder         : Lavf59.20.101
2024-02-11T17:31:21.354595886Z   Stream #0:0(eng): Video: hevc (Main 10), 1 reference frame, yuv420p10le(tv, bt709, progressive, left), 1280x720 (0x0) [SAR 1:1 DAR 16:9], q=2-31, 25 fps, 25 tbr, 12800 tbn (default)
2024-02-11T17:31:21.355088329Z   Stream #0:1(eng): Audio: aac (LC), 48000 Hz, stereo, fltp (default)
2024-02-11T17:31:21.355477428Z Stream mapping:
2024-02-11T17:31:21.356196326Z   Stream #0:0 -> #0:0 (copy)
2024-02-11T17:31:21.356621212Z   Stream #0:1 -> #0:1 (copy)
2024-02-11T17:31:21.357230514Z Press [q] to stop, [?] for help
2024-02-11T17:31:21.358389908Z [AVIOContext @ 0x7f8080508680] Statistics: 3279 bytes written, 0 seeks, 1 writeouts
2024-02-11T17:31:21.359321394Z [dash @ 0x7f80798cf680] Opening 'chunk-stream0-00001.m4s.tmp' for writing
2024-02-11T17:31:21.360495155Z [AVIOContext @ 0x7f8079a1a180] Statistics: 742 bytes written, 0 seeks, 1 writeouts
2024-02-11T17:31:21.361370726Z [dash @ 0x7f80798cf680] Opening 'chunk-stream1-00001.m4s.tmp' for writing
2024-02-11T17:31:21.370694775Z [AVIOContext @ 0x7f8079a1a640] Statistics: 1192493 bytes written, 0 seeks, 5 writeouts
2024-02-11T17:31:21.383264699Z [dash @ 0x7f80798cf680] Representation 0 media segment 2 written to: chunk-stream0-00001.m4s
2024-02-11T17:31:21.384391141Z [AVIOContext @ 0x7f8080508680] Statistics: 162222 bytes written, 0 seeks, 1 writeouts
2024-02-11T17:31:21.386910243Z [dash @ 0x7f80798cf680] Representation 1 media segment 2 written to: chunk-stream1-00001.m4s
2024-02-11T17:31:21.387281018Z [dash @ 0x7f80798cf680] Opening 'http://10.147.52.128:32400/video/:/transcode/session/rjy0gs7c2l8huq7mkef4wlec/7cda6ffc-f7c4-408e-8f40-71f528a57e68/manifest?X-Plex-Http-Pipeline=infinite' for writing
2024-02-11T17:31:21.387962856Z [tcp @ 0x7f807f369e80] Starting connection attempt to 10.147.52.128 port 32400
2024-02-11T17:31:21.388885506Z [tcp @ 0x7f807f369e80] Successfully connected to 10.147.52.128 port 32400
2024-02-11T17:31:21.389612669Z [AVIOContext @ 0x7f8080508680] Statistics: 2033 bytes written, 0 seeks, 1 writeouts
2024-02-11T17:31:21.390195151Z [dash @ 0x7f80798cf680] Opening 'chunk-stream0-00002.m4s.tmp' for writing
2024-02-11T17:31:21.391320581Z [dash @ 0x7f80798cf680] Opening 'chunk-stream1-00002.m4s.tmp' for writing
2024-02-11T17:31:21.403066932Z [AVIOContext @ 0x7f8079a1a180] Statistics: 1684085 bytes written, 0 seeks, 7 writeouts
2024-02-11T17:31:21.421098816Z [dash @ 0x7f80798cf680] Representation 0 media segment 3 written to: chunk-stream0-00002.m4s
2024-02-11T17:31:21.422055539Z [AVIOContext @ 0x7f8080508680] Statistics: 164395 bytes written, 0 seeks, 1 writeouts
2024-02-11T17:31:21.424553872Z [dash @ 0x7f80798cf680] Representation 1 media segment 3 written to: chunk-stream1-00002.m4s
2024-02-11T17:31:21.424875265Z [dash @ 0x7f80798cf680] Opening 'http://10.147.52.128:32400/video/:/transcode/session/rjy0gs7c2l8huq7mkef4wlec/7cda6ffc-f7c4-408e-8f40-71f528a57e68/manifest?X-Plex-Http-Pipeline=infinite' for writing
2024-02-11T17:31:21.425264695Z [tcp @ 0x7f807ec20280] Starting connection attempt to 10.147.52.128 port 32400
2024-02-11T17:31:21.425793025Z [tcp @ 0x7f807ec20280] Successfully connected to 10.147.52.128 port 32400
2024-02-11T17:31:21.426251274Z [AVIOContext @ 0x7f8079a1a1c0] Statistics: 2079 bytes written, 0 seeks, 1 writeouts
2024-02-11T17:31:21.426873791Z [dash @ 0x7f80798cf680] Opening 'chunk-stream0-00003.m4s.tmp' for writing
2024-02-11T17:31:21.427642693Z [dash @ 0x7f80798cf680] Opening 'chunk-stream1-00003.m4s.tmp' for writing
2024-02-11T17:31:21.435543194Z [AVIOContext @ 0x7f8080508680] Statistics: 1320913 bytes written, 0 seeks, 6 writeouts
2024-02-11T17:31:21.450062423Z [dash @ 0x7f80798cf680] Representation 0 media segment 4 written to: chunk-stream0-00003.m4s
2024-02-11T17:31:21.450913829Z [AVIOContext @ 0x7f8079a1a1c0] Statistics: 164684 bytes written, 0 seeks, 1 writeouts
2024-02-11T17:31:21.453862356Z [dash @ 0x7f80798cf680] Representation 1 media segment 4 written to: chunk-stream1-00003.m4s
2024-02-11T17:31:21.454202985Z [dash @ 0x7f80798cf680] Opening 'http://10.147.52.128:32400/video/:/transcode/session/rjy0gs7c2l8huq7mkef4wlec/7cda6ffc-f7c4-408e-8f40-71f528a57e68/manifest?X-Plex-Http-Pipeline=infinite' for writing
2024-02-11T17:31:21.454744229Z [tcp @ 0x7f807ec20280] Starting connection attempt to 10.147.52.128 port 32400
2024-02-11T17:31:21.455434544Z [tcp @ 0x7f807ec20280] Successfully connected to 10.147.52.128 port 32400
2024-02-11T17:31:21.455941915Z [AVIOContext @ 0x7f8079a1a1c0] Statistics: 2108 bytes written, 0 seeks, 1 writeouts
2024-02-11T17:31:21.456370589Z [dash @ 0x7f80798cf680] Opening 'chunk-stream0-00004.m4s.tmp' for writing
2024-02-11T17:31:21.457216434Z [dash @ 0x7f80798cf680] Opening 'chunk-stream1-00004.m4s.tmp' for writing
2024-02-11T17:31:21.463191405Z [AVIOContext @ 0x7f8079a1a640] Statistics: 676277 bytes written, 0 seeks, 3 writeouts
2024-02-11T17:31:21.471718170Z [dash @ 0x7f80798cf680] Representation 0 media segment 5 written to: chunk-stream0-00004.m4s
2024-02-11T17:31:21.472402442Z [AVIOContext @ 0x7f8080508680] Statistics: 163327 bytes written, 0 seeks, 1 writeouts
2024-02-11T17:31:21.475206238Z [dash @ 0x7f80798cf680] Representation 1 media segment 5 written to: chunk-stream1-00004.m4s
2024-02-11T17:31:21.475592342Z [dash @ 0x7f80798cf680] Opening 'http://10.147.52.128:32400/video/:/transcode/session/rjy0gs7c2l8huq7mkef4wlec/7cda6ffc-f7c4-408e-8f40-71f528a57e68/manifest?X-Plex-Http-Pipeline=infinite' for writing
2024-02-11T17:31:21.476013912Z [tcp @ 0x7f807ec20280] Starting connection attempt to 10.147.52.128 port 32400
2024-02-11T17:31:21.476438398Z [tcp @ 0x7f807ec20280] Successfully connected to 10.147.52.128 port 32400
2024-02-11T17:31:21.476881809Z [AVIOContext @ 0x7f8079a1a180] Statistics: 2154 bytes written, 0 seeks, 1 writeouts
2024-02-11T17:31:21.477410129Z [dash @ 0x7f80798cf680] Opening 'chunk-stream0-00005.m4s.tmp' for writing
2024-02-11T17:31:21.478059637Z [dash @ 0x7f80798cf680] Opening 'chunk-stream1-00005.m4s.tmp' for writing
2024-02-11T17:31:21.484143242Z [AVIOContext @ 0x7f8079a1a640] Statistics: 698840 bytes written, 0 seeks, 3 writeouts
2024-02-11T17:31:21.492342384Z [dash @ 0x7f80798cf680] Representation 0 media segment 6 written to: chunk-stream0-00005.m4s
2024-02-11T17:31:21.493338832Z [AVIOContext @ 0x7f8080508680] Statistics: 164063 bytes written, 0 seeks, 1 writeouts
2024-02-11T17:31:21.495911414Z [dash @ 0x7f80798cf680] Representation 1 media segment 6 written to: chunk-stream1-00005.m4s
2024-02-11T17:31:21.496326612Z [dash @ 0x7f80798cf680] Opening 'http://10.147.52.128:32400/video/:/transcode/session/rjy0gs7c2l8huq7mkef4wlec/7cda6ffc-f7c4-408e-8f40-71f528a57e68/manifest?X-Plex-Http-Pipeline=infinite' for writing
2024-02-11T17:31:21.496933349Z [tcp @ 0x7f807ec3a680] Starting connection attempt to 10.147.52.128 port 32400
2024-02-11T17:31:21.497906434Z [tcp @ 0x7f807ec3a680] Successfully connected to 10.147.52.128 port 32400
...
... [THIS GOES ON WITH NO ISSUES]
...
2024-02-11T17:31:28.338723642Z [dash @ 0x7f80798cf680] Opening 'chunk-stream0-00280.m4s.tmp' for writing
2024-02-11T17:31:28.339494026Z [dash @ 0x7f80798cf680] Opening 'chunk-stream1-00280.m4s.tmp' for writing
2024-02-11T17:31:28.347713395Z [AVIOContext @ 0x7f8080508680] Statistics: 1359785 bytes written, 0 seeks, 6 writeouts
2024-02-11T17:31:28.361056147Z [dash @ 0x7f80798cf680] Representation 0 media segment 281 written to: chunk-stream0-00280.m4s
2024-02-11T17:31:28.361971483Z [AVIOContext @ 0x7f8079a1a640] Statistics: 164009 bytes written, 0 seeks, 1 writeouts
2024-02-11T17:31:28.364777925Z [dash @ 0x7f80798cf680] Representation 1 media segment 281 written to: chunk-stream1-00280.m4s
2024-02-11T17:31:28.365224151Z [dash @ 0x7f80798cf680] Opening 'http://10.147.52.128:32400/video/:/transcode/session/rjy0gs7c2l8huq7mkef4wlec/7cda6ffc-f7c4-408e-8f40-71f528a57e68/manifest?X-Plex-Http-Pipeline=infinite' for writing
2024-02-11T17:31:28.365924083Z [tcp @ 0x7f807f43c080] Starting connection attempt to 10.147.52.128 port 32400
2024-02-11T17:31:28.366561488Z [tcp @ 0x7f807f43c080] Successfully connected to 10.147.52.128 port 32400
2024-02-11T17:31:28.367143459Z [AVIOContext @ 0x7f8079a1a640] Statistics: 2219 bytes written, 0 seeks, 1 writeouts
2024-02-11T17:31:28.367843130Z [dash @ 0x7f80798cf680] Opening 'chunk-stream0-00281.m4s.tmp' for writing
2024-02-11T17:31:28.368921081Z [dash @ 0x7f80798cf680] Opening 'chunk-stream1-00281.m4s.tmp' for writing
2024-02-11T17:31:28.377039009Z No more output streams to write to, finishing.
2024-02-11T17:31:28.379953073Z [AVIOContext @ 0x7f8080508680] Statistics: 1394586 bytes written, 0 seeks, 6 writeouts
2024-02-11T17:31:28.394024302Z [dash @ 0x7f80798cf680] Representation 0 media segment 282 written to: chunk-stream0-00281.m4s
2024-02-11T17:31:28.395100539Z [AVIOContext @ 0x7f8079a1a640] Statistics: 162798 bytes written, 0 seeks, 1 writeouts
2024-02-11T17:31:28.397850284Z [dash @ 0x7f80798cf680] Representation 1 media segment 282 written to: chunk-stream1-00281.m4s
2024-02-11T17:31:28.398219526Z [dash @ 0x7f80798cf680] Opening 'http://10.147.52.128:32400/video/:/transcode/session/rjy0gs7c2l8huq7mkef4wlec/7cda6ffc-f7c4-408e-8f40-71f528a57e68/manifest?X-Plex-Http-Pipeline=infinite' for writing
2024-02-11T17:31:28.398660713Z [tcp @ 0x7f807f180f40] Starting connection attempt to 10.147.52.128 port 32400
2024-02-11T17:31:28.399194264Z [tcp @ 0x7f807f180f40] Successfully connected to 10.147.52.128 port 32400
2024-02-11T17:31:28.399702376Z [AVIOContext @ 0x7f8079a1a4c0] Statistics: 2088 bytes written, 0 seeks, 1 writeouts
2024-02-11T17:31:28.400704786Z frame=70246 fps=9974 q=-1.0 Lsize=N/A time=00:43:30.05 bitrate=N/A speed= 371x    
2024-02-11T17:31:28.401034484Z video:262205kB audio:43904kB subtitle:0kB other streams:0kB global headers:2kB muxing overhead: unknown
2024-02-11T17:31:28.401322524Z Input file #0 (/data/TV/Taskmaster (2015)/Season 13/Taskmaster S13E09 [HDTV-720p][AAC 2.0][x265]-MeGusta.mkv):
2024-02-11T17:31:28.401643996Z   Input stream #0:0 (video): 70246 packets read (268497537 bytes); 
2024-02-11T17:31:28.401979285Z   Input stream #0:1 (audio): 131714 packets read (44957815 bytes); 
2024-02-11T17:31:28.402289546Z   Input stream #0:2 (subtitle): 0 packets read (0 bytes); 
2024-02-11T17:31:28.402608494Z   Total: 201960 packets (313455352 bytes) demuxed
2024-02-11T17:31:28.402930878Z Output file #0 (dash):
2024-02-11T17:31:28.403214931Z   Output stream #0:0 (video): 70246 packets muxed (268497537 bytes); 
2024-02-11T17:31:28.403527667Z   Output stream #0:1 (audio): 131714 packets muxed (44957815 bytes); 
2024-02-11T17:31:28.403867203Z   Total: 201960 packets (313455352 bytes) muxed
2024-02-11T17:31:28.404432824Z [AVIOContext @ 0x7f8080508040] Statistics: 315102100 bytes read, 3 seeks
2024-02-11T17:31:28.409421085Z Transcoder exit: child process exited with code 0
2024-02-11T17:31:28.409454739Z Completed transcode
2024-02-11T17:31:28.409626561Z Removing process from taskMap
2024-02-11T17:31:28.409659232Z Transcoder close: child process exited with code 0


This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale Issue has been inactive for more than 30 days label Mar 13, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Mar 28, 2024
@pabloromeo pabloromeo reopened this Mar 28, 2024
@github-actions github-actions bot removed the stale Issue has been inactive for more than 30 days label Mar 29, 2024
@evanrich
Contributor

evanrich commented Apr 27, 2024

Bringing this back up as I'm suffering similar issues. Is there a suggested workaround for the remote workers to get the cache/drivers folder for the GPU drivers?

Edit: Looks like @audiophonicz's solution worked for me as well.
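
For anyone landing here, a minimal sketch of one possible workaround (an assumption on my part, not necessarily the solution referenced above): back the main PMS pod's Cache directory with a shared volume and mount that same volume into each worker, so the va-dri-linux-x86_64 drivers that PMS downloads become visible at the path the worker's transcoder probes. All names below are placeholders.

# Hypothetical excerpt of the worker pod spec (volume and claim names are placeholders)
containers:
  - name: plex-worker
    volumeMounts:
      - name: plex-cache
        # same volume the main PMS pod writes its Cache to, so the downloaded
        # va-dri-linux-x86_64 driver directory is visible to the worker
        mountPath: /config/Library/Application Support/Plex Media Server/Cache
volumes:
  - name: plex-cache
    persistentVolumeClaim:
      claimName: plex-cache   # placeholder; needs ReadWriteMany if workers run on other nodes

Note this only addresses the missing driver files; the GPU device still has to be exposed to the worker pod separately (for example via the Intel device plugin resource request) for VAAPI to initialize.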


This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale Issue has been inactive for more than 30 days label May 28, 2024