
Provisioner on k3s switches namespace, config after reboot #232

Closed
samstride opened this issue Feb 20, 2022 · 7 comments

@samstride

Hi,

Thanks for maintaining this repo.

I have installed the provisioner on a single node k3s cluster (home lab) using:

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

The provisioner originally installs itself in namespace local-path-storage.

However, after a node restart (power outage), the provisioner has deployments and configs set up in kube-system, and the original deployment fails with an error about service account permissions. Sorry, I don't have the exact message handy.

Thanks.

@derekbit
Member

@samstride
Can you show the pods in your system with k get pods -A | grep local-path-provisioner?
BTW, k3s already embeds local-path-provisioner, so you can use it directly.

@samstride
Author

samstride commented Mar 16, 2022

@derekbit , thanks for responding.

When I originally installed k3s, I don't think the local provisioner got installed.

I reinstalled the provisioner in kube-system now.

Here is the output you requested:

kubectl get pods -A | grep local-path-provisioner

kube-system            local-path-provisioner-84bb864455-47fv4        1/1     Running     2 (2d16h ago)   13d

Another thing that happens after a restart is that the configmap seems to change.

I originally had something like this:

  config.json: |-
    {
            "nodePathMap":[
            {
                    "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                    "paths":["/opt/local-path-provisioner"]
            }
            ]
    }

Then after a node restart, it changed to this:

  config.json: |-
    {
            "nodePathMap":[
            {
                    "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                    "paths":["/var/lib/rancher/k3s/storage"]
            }
            ]
    }

There is a small possibility that k3s was upgraded from v1.22.6+k3s1 to v1.22.7+k3s1 before the reboot. Not sure if that might have caused this issue with the configmap.

@samstride samstride changed the title Provisioner on k3s switches namespace after reboot Provisioner on k3s switches namespace, config after reboot Mar 16, 2022
@chenyg0911

Same problem with "v1.22.5+k3s1". I updated the local-path-config configmap to use a different disk volume instead of the default path. When I restart the local-path-provisioner-xxxx pods, it's OK. But when I restart k3s, it reverts to the default path "/var/lib/rancher/k3s/storage". I also tried updating the config under /var/lib/rancher/k3s/server/manifests/local-storage.yaml; the effect is the same: when k3s restarts, it also reverts to the default path.

@harryzcy

harryzcy commented Mar 3, 2023

@chenyg0911 The default path in k3s is set by the --default-local-storage-path CLI argument when starting k3s. If the argument is omitted, it defaults to "/var/lib/rancher/k3s/storage".
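For a systemd-based install, the same flag can be set persistently via the k3s configuration file instead of editing the service unit. A minimal sketch (the path /mnt/data/local-path is only a placeholder, not from this thread):

```yaml
# /etc/rancher/k3s/config.yaml
# Equivalent to passing --default-local-storage-path on the k3s command line.
# Replace /mnt/data/local-path with your actual mount point.
default-local-storage-path: /mnt/data/local-path
```

After editing, restart k3s (e.g. `sudo systemctl restart k3s`); the embedded provisioner's manifest should then be regenerated with the new path instead of reverting to /var/lib/rancher/k3s/storage.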

@gb-123-git

Is there any way to make the configuration stick other than restarting k3s with --default-local-storage-path? Is there a possibility of creating something like local-storage-custom.yaml that overrides the default file for the path?
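One option, not discussed in this thread but a documented k3s mechanism, is to disable the packaged local-storage component entirely so k3s stops regenerating its manifest on restart, and then manage a standalone local-path-provisioner install (with your own namespace and nodePathMap) yourself. A sketch, assuming the config-file approach:

```yaml
# /etc/rancher/k3s/config.yaml
# Disable the packaged local-path-provisioner so k3s no longer
# rewrites its manifest and ConfigMap on every restart.
disable:
  - local-storage
```

After restarting k3s, a deployment applied from deploy/local-path-storage.yaml should keep its configuration across reboots, since k3s no longer owns that component.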


github-actions bot commented Jul 3, 2024

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

@github-actions github-actions bot added the stale label Jul 3, 2024

github-actions bot commented Jul 8, 2024

This issue was closed because it has been stalled for 5 days with no activity.

@github-actions github-actions bot closed this as completed Jul 8, 2024