
Does local-path support multiple StorageClasses for disks with different performance #254

Open
zhfg opened this issue Jun 25, 2022 · 11 comments

@zhfg

zhfg commented Jun 25, 2022

I have an HDD and an SSD installed in my host server.
I want to create two Kubernetes StorageClasses, one for the HDD and one for the SSD, to match different performance requirements.
Does local-path support this, and how do I set it up?

@wojtasool

I would say that you can create two or more installations of Local Path Provisioner, each with different disks configured and with a different name for each StorageClass. This way you can reference different disk types using a specific SC.

@zhfg
Author

zhfg commented Jun 30, 2022

Thanks for your reply.
I can't find any documentation about what you described, for example how to define the provisioner myself.
I will do more research and leave an update once I make any progress.

@qianyidui

apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage-ssd
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage-ssd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role-ssd
rules:
  - apiGroups: [ "" ]
    resources: [ "nodes", "persistentvolumeclaims", "configmaps" ]
    verbs: [ "get", "list", "watch" ]
  - apiGroups: [ "" ]
    resources: [ "endpoints", "persistentvolumes", "pods" ]
    verbs: [ "*" ]
  - apiGroups: [ "" ]
    resources: [ "events" ]
    verbs: [ "create", "patch" ]
  - apiGroups: [ "storage.k8s.io" ]
    resources: [ "storageclasses" ]
    verbs: [ "get", "list", "watch" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind-ssd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role-ssd
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage-ssd
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner-ssd
  namespace: local-path-storage-ssd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: rancher/local-path-provisioner:v0.0.22
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: PROVISIONER_NAME
              value: rancher.io/local-path-ssd
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-ssd
provisioner: rancher.io/local-path-ssd
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage-ssd
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"k3s",
          "paths":["/raid1"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    set -eu
    mkdir -m 0777 -p "$VOL_DIR"
  teardown: |-
    #!/bin/sh
    set -eu
    rm -rf "$VOL_DIR"
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
        - name: helper-pod
          image: busybox
          imagePullPolicy: IfNotPresent
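
A workload can then request SSD-backed storage by naming the new class in its PVC (a minimal sketch; the claim name and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-ssd
spec:
  accessModes:
    - ReadWriteOnce
  # References the StorageClass created by the second installation above.
  storageClassName: local-path-ssd
  resources:
    requests:
      storage: 10Gi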


@qianyidui

qianyidui commented Jul 19, 2022

env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: PROVISIONER_NAME
    value: rancher.io/local-path-ssd

@lholota

lholota commented Nov 22, 2022

I'm currently hitting the same thing; to be honest, this feels semantically wrong. The point of a storage class is precisely to define classes of storage (i.e. fast = SSD, slower = HDD, etc.), which may differ in parameters/options.

I believe the provisioner should take the paths from the StorageClass definition rather than from the centralized ConfigMap. That way, paths/sharedFileSystemPath could be defined per StorageClass and you wouldn't need multiple provisioner pods running inside the same cluster (in my case it's 5 installations of the provisioner, which seems rather cumbersome...).

A backward-compatible way to implement this would be to let the user define these fields in both places, with the StorageClass acting as an override of the default values found in the ConfigMap.
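
For illustration, the override might look something like this (hypothetical syntax: local-path-provisioner does not read a nodePath parameter today, so this sketches the proposal rather than an existing feature):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-hdd
provisioner: rancher.io/local-path
parameters:
  # Hypothetical per-class override; the provisioner would fall back to
  # the ConfigMap's nodePathMap when this is omitted.
  nodePath: /mnt/hdd
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete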

@lholota

lholota commented Jan 23, 2023

Hi all, is there any chance the suggestion above will be considered, or is it too big a departure from the current design?

@LinkMaq

LinkMaq commented Feb 22, 2023

I think local-path-provisioner was designed for simple scenarios from the beginning. If a Kubernetes cluster needs to offer disks of multiple performance tiers through StorageClasses, using openebs/local-hostpath would be more reasonable.

@ChristianCiach

This should be fixed by #306, I think.

@laurivosandi

The setup and teardown scripts and the helper pod image can also differ per storage class, so #306 is a short-sighted solution. I would recommend giving up the ConfigMap as it currently stands and moving the scripts into the StorageClass definition.
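
Sketching that idea (again purely hypothetical, nothing like this is implemented), the per-class scripts and helper image would sit next to the path override:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-nvme
provisioner: rancher.io/local-path
parameters:
  # All hypothetical: StorageClass parameters are plain string key/value
  # pairs, so multi-line scripts fit, but the provisioner would need to
  # be taught to read them.
  nodePath: /mnt/nvme
  helperPodImage: busybox
  setup: |
    #!/bin/sh
    set -eu
    mkdir -m 0777 -p "$VOL_DIR"
  teardown: |
    #!/bin/sh
    set -eu
    rm -rf "$VOL_DIR"
volumeBindingMode: WaitForFirstConsumer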


This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

@github-actions github-actions bot added the stale label Jun 28, 2024