k8s: guest-pull: Kill all processes in container test fails when pulling the image inside the guest #9664
Comments
This test fails when using `shared_fs=none` with the nydus snapshotter, and we're tracking the issue here: kata-containers#9664 For now, let's have it skipped. Signed-off-by: Fabiano Fidêncio <[email protected]>
Hi @fidencio! One question: did you get this failure when setting shared_fs=none and the runtime handler annotation? Asking because I'm getting this error even when shared_fs=9p too. I realized that because I'm running a set of tests for qemu-coco-dev with the runtime handler annotation ("io.containerd.cri.runtime-handler") set. Another test that's failing and I haven't seen you report is k8s-empty-dirs.bats. Wondering why.
Fix tests: - k8s-credentials-secrets.bats - k8s-file-volume.bats - k8s-nested-configmap-secret.bats - k8s-projected-volume.bats - k8s-volume.bats Fixes: kata-containers#9664 kata-containers#9666 kata-containers#9667 kata-containers#9668 Signed-off-by: ChengyuZhu6 <[email protected]>
Revert Fix tests: - k8s-credentials-secrets.bats - k8s-file-volume.bats - k8s-nested-configmap-secret.bats - k8s-projected-volume.bats - k8s-volume.bats - k8s-shared-volume.bats - k8s-kill-all-process-in-container.bats - k8s-sysctls.bats Fixes: kata-containers#9664 kata-containers#9666 kata-containers#9667 kata-containers#9668 Signed-off-by: ChengyuZhu6 <[email protected]>
Revert code logic in 462051b

Let me explain why: In our previous approach, we implemented guest pull by passing PullImageRequest to the guest. However, this method resulted in the loss of specifications essential for running the container, such as commands specified in the YAML, during the CreateContainer stage. To address this, it is necessary to merge the OCI spec and process information from the image's configuration into the container during guest pull. The snapshotter method is not affected by this issue.

Nevertheless, a problem arises when two containers in the same pod attempt to pull the same image, as with an InitContainer. This is because the image service searches for the existing configuration, which resides in the guest. The configuration, associated with <image name, cid>, is stored in the directory /run/kata-containers/<cid>. Consequently, when the InitContainer finishes its task and terminates, that directory ceases to exist. As a result, during the creation of the application container, the OCI spec and process information cannot be merged due to the absence of the expected configuration file.

Fix tests:
- k8s-credentials-secrets.bats
- k8s-file-volume.bats
- k8s-nested-configmap-secret.bats
- k8s-projected-volume.bats
- k8s-volume.bats
- k8s-shared-volume.bats
- k8s-kill-all-process-in-container.bats
- k8s-sysctls.bats

Fixes: kata-containers#9664 Fixes: kata-containers#9666 Fixes: kata-containers#9667 Fixes: kata-containers#9668 Signed-off-by: ChengyuZhu6 <[email protected]>
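The lifecycle problem described above can be illustrated with a small sketch. This is not Kata code; the function names and layout are hypothetical, and only the path convention (/run/kata-containers/<cid>) comes from the commit message: the guest stores the pulled image's config keyed by container id, the directory is removed when that container exits, so a later container pulling the same image finds nothing to merge.

```python
# Illustrative sketch (NOT actual Kata code) of the per-cid config
# lifecycle that breaks InitContainer + app container sharing an image.
import os
import shutil
import tempfile

def store_config(root, cid, config):
    # Guest pull saves the image config under <root>/<cid> (hypothetically
    # mirroring /run/kata-containers/<cid> in the guest).
    path = os.path.join(root, cid)
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "config.json"), "w") as f:
        f.write(config)

def remove_container(root, cid):
    # When the container (e.g. an InitContainer) terminates,
    # its directory is cleaned up.
    shutil.rmtree(os.path.join(root, cid))

def load_config(root, cid):
    # The merge step looks for an existing configuration for this cid.
    path = os.path.join(root, cid, "config.json")
    if not os.path.exists(path):
        return None  # OCI spec/process merge fails: config is gone
    with open(path) as f:
        return f.read()

root = tempfile.mkdtemp()
store_config(root, "init-cid", '{"Cmd": ["/bin/init"]}')
remove_container(root, "init-cid")
# The app container, created after the InitContainer exited,
# finds no configuration to merge:
print(load_config(root, "app-cid"))  # -> None
```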
This test fails with qemu-coco-dev configuration and guest-pull image pull. Issue: kata-containers#9664 Signed-off-by: Wainer dos Santos Moschetta <[email protected]>
This test fails when using `shared_fs=none` with the nydus snapshotter. Issue tracked here: kata-containers#9664 Skipping for now. Signed-off-by: Ryan Savino <[email protected]>
And then taking a look at the test itself, we see:
And when taking a look at the error, it makes me think that nydus is not properly being used for initContainers.
@ChengyuZhu6, would you mind verifying whether it works on your end?