Support attaching volumes and volumeMounts to the apply and destroy pods #357

Open

LochanRn opened this issue Mar 4, 2023 · 11 comments

LochanRn commented Mar 4, 2023

Ability to attach volumes and volumeMounts to the Terraform Job.

This would allow the user to mount files into the pod via a Secret or a ConfigMap, which the Terraform executor pod can then use while creating the resource.
For example, the user may want to deploy a VM and pass a cloud-init file to it.
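
For illustration, the cloud-init file could be shipped in a ConfigMap and wired up with the standard corev1 types. A rough sketch (the ConfigMap name and mount path here are made up):

```go
import corev1 "k8s.io/api/core/v1"

// Hypothetical example: a cloud-init file from a ConfigMap, mounted so the
// Terraform executor pod can hand it to the VM resource.
var cloudInitVolume = corev1.Volume{
	Name: "cloud-init",
	VolumeSource: corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "vm-cloud-init"},
		},
	},
}

var cloudInitMount = corev1.VolumeMount{
	Name:      "cloud-init",
	MountPath: "/opt/tf/cloud-init", // executor reads the file from here
}
```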

chivalryq (Member) commented

Hi @LochanRn. Are you using terraform-controller to create VM-like cloud resources and have run into this situation? A PR is always welcome.

LochanRn (Author) commented Mar 7, 2023

@chivalryq Yes, I would like to raise a PR for this. I'm planning to keep the design as below; please let me know if it looks fine:

```go
// ConfigurationSpec defines the desired state of Configuration
type ConfigurationSpec struct {
	...
	VolumeSpec VolumeSpec `json:"volumeSpec,omitempty"`
}

type VolumeSpec struct {
	Volumes      []corev1.Volume      `json:"volumes,omitempty"`
	VolumeMounts []corev1.VolumeMount `json:"volumeMounts,omitempty"`
}
```
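
On the controller side, wiring this into the executor Job could look roughly like the following. This is only a sketch; applyVolumeSpec is a hypothetical helper, not an existing function in terraform-controller:

```go
import batchv1 "k8s.io/api/batch/v1"

// Hypothetical helper: copy the user-supplied volumes and mounts into the
// executor Job's pod template. Every container in the pod gets the mounts;
// InitContainers may need the same treatment.
func applyVolumeSpec(job *batchv1.Job, spec VolumeSpec) {
	pod := &job.Spec.Template.Spec
	pod.Volumes = append(pod.Volumes, spec.Volumes...)
	for i := range pod.Containers {
		pod.Containers[i].VolumeMounts = append(pod.Containers[i].VolumeMounts, spec.VolumeMounts...)
	}
}
```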

chivalryq (Member) commented

Do we need to specify the phase (apply/destroy) for volumes to mount? IMO in most cases we can assume the apply phase needs a volume. But is there any chance we need it in the destroy phase as well? @LochanRn

LochanRn (Author) commented Mar 7, 2023

There is a use case where we need to run a shell script to clean up some of the external environment. In that case we would require the volumeMount option during destroy as well :)

chivalryq (Member) commented

So how about splitting the spec into:

```go
// ConfigurationSpec defines the desired state of Configuration
type ConfigurationSpec struct {
	...
	Volumes Volumes `json:"volumes,omitempty"`
}

type Volumes struct {
	Apply   VolumeSpec `json:"apply,omitempty"`
	Destroy VolumeSpec `json:"destroy,omitempty"`
}

type VolumeSpec struct {
	Volumes      []corev1.Volume      `json:"volumes,omitempty"`
	VolumeMounts []corev1.VolumeMount `json:"volumeMounts,omitempty"`
}
```
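
The controller would then pick the spec for whichever Job it is assembling. A sketch (the phase string is a stand-in for however the controller names its phases):

```go
// Hypothetical helper: select the VolumeSpec for the Job being assembled.
func volumeSpecFor(v Volumes, phase string) VolumeSpec {
	if phase == "destroy" {
		return v.Destroy
	}
	return v.Apply
}
```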

LochanRn (Author) commented Mar 7, 2023

Yes, doable.

LochanRn (Author) commented

Hi, I was thinking about the above use case: if the user wants the volumes in both the apply and destroy pods, they will have to add the same block twice. @chivalryq WDYT?

chivalryq (Member) commented

I didn't think of that before, but yes, it's possible. I have another question: you mentioned there's a case where a script is run when destroying. Who is responsible for calling that script if we mount those volumes into the destroy Job pod? IMO pre-start and post-destroy hooks may be what we need.

LochanRn (Author) commented

Before destroying the VM we run the script, and we do that using a provisioner:

```hcl
provisioner "local-exec" {
  when    = destroy
  command = "echo 'Executing pre vm destroy hook'; sleep 5s"
}
```

LochanRn (Author) commented

Switching back to this model to avoid redundancy.

```go
// ConfigurationSpec defines the desired state of Configuration
type ConfigurationSpec struct {
	...
	VolumeSpec VolumeSpec `json:"volumeSpec,omitempty"`
}

type VolumeSpec struct {
	Volumes      []corev1.Volume      `json:"volumes,omitempty"`
	VolumeMounts []corev1.VolumeMount `json:"volumeMounts,omitempty"`
}
```
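
With the single spec, the controller would attach the same volumes to both Jobs, e.g. (sketch, reusing the hypothetical applyVolumeSpec helper from above):

```go
// One VolumeSpec, mounted into both the apply and the destroy Job.
applyVolumeSpec(applyJob, cfg.Spec.VolumeSpec)
applyVolumeSpec(destroyJob, cfg.Spec.VolumeSpec)
```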

chivalryq (Member) commented

Sure, it's also doable.
