
Add in OpenStack support for demos #64

Open · wants to merge 7 commits into main

Conversation

wkonitzer (Contributor)

PR to add support for OpenStack demos.

Includes:

  • changes to let kind function correctly with the presales OpenStack cloud, which uses the same subnet.
  • changes to let the demo run against MKE3 or Lens LDK (instead of kind)

@Algeran (Collaborator) commented Jan 20, 2025

Please remove everything related to using an existing kind cluster from this PR. We deliberately create a new cluster in these demos to simplify the installation and configuration of k0rdent. That said, showing how to install k0rdent into an existing cluster is a good case, but it is definitely beyond this PR and needs additional testing on various systems. You can create a separate PR and issue for it.
Also, don't force any root or global system changes in commands.

@@ -15,10 +15,19 @@ TARGET_NAMESPACE ?= blue
KIND_CLUSTER_NAME ?= k0rdent-management-local
KIND_KUBECTL_CONTEXT = kind-$(KIND_CLUSTER_NAME)

-OPENSSL_DOCKER_IMAGE ?= alpine/openssl:3.3.2
+OPENSSL_DOCKER_IMAGE ?= debian:bookworm-slim
Collaborator

Why did you change the image? Please describe why this image is needed instead of the previous one.

Contributor Author

The alpine image didn't work on my laptop, so I changed it to one that did.

%helm: binary = helm
%yq: binary = yq
%yq: url = "https://github.com/mikefarah/yq/releases/download/$(YQ_VERSION)/yq_$(OS)_$(ARCH)"

.PHONY: kubectl
kubectl: $(LOCALBIN)/kubectl ## Install kubectl binary locally if necessary
Collaborator

What is this for? The initial idea is that if a required binary is not found in PATH, it is installed into the local bin directory (line 74) and all commands use it from there. This block changes that behaviour. Please remove it.

Contributor Author

I disagree with the old approach: either kubectl is already installed, or, if it isn't, we install it properly. If we can't agree on this, let's remove it from the script entirely and make it a prerequisite that the user installs correctly.

Collaborator

  1. What is the issue with installing binaries in the local bin directory? It's common practice for Go projects, for example.
  2. Users don't always have root access on their machines. For example, we had a case where users launched this demo on a fresh AWS EC2 instance and weren't working as root; this approach would fail there. We can add workarounds for such cases. Or we can force people to install all the binaries (~6-7), link the docs, and make them read a lot of information about things they don't actually need to know if they just want to see how k0rdent works. Or we can just download the required binaries, at versions we have tested on different platforms, and use them from the local bin directory. From a UX perspective the last approach looks better to me.
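The download-to-local-bin behaviour described here can be sketched as follows. This is an illustration, not the Makefile's actual code; `LOCALBIN`, `resolve_binary`, and `demo-tool` are made-up names, and for a real tool like kubectl the install command would be a curl download of a pinned, tested version:

```shell
#!/bin/sh
# Sketch of the "local bin" fallback: prefer a binary already on PATH;
# otherwise install it into ./bin (no sudo, no global system changes)
# and use it from there.
set -eu

LOCALBIN="${LOCALBIN:-$(pwd)/bin}"
mkdir -p "$LOCALBIN"

# resolve_binary NAME INSTALL_CMD -> prints the path to a usable binary.
# INSTALL_CMD only runs when NAME is missing from both PATH and LOCALBIN.
resolve_binary() {
    if command -v "$1" >/dev/null 2>&1; then
        command -v "$1"            # already installed system-wide
    else
        [ -x "$LOCALBIN/$1" ] || eval "$2"   # fetch once, no root needed
        echo "$LOCALBIN/$1"
    fi
}

# Demo with a stand-in tool so the sketch runs without network access:
install_demo_tool="printf '#!/bin/sh\necho ok\n' > \"$LOCALBIN/demo-tool\"; chmod +x \"$LOCALBIN/demo-tool\""
tool="$(resolve_binary demo-tool "$install_demo_tool")"
"$tool"    # prints "ok"
```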

@@ -112,7 +130,7 @@ helm: $(LOCALBIN)/helm ## Install helm binary locally if necessary
HELM_INSTALL_SCRIPT ?= "https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3"
$(LOCALBIN)/helm: | $(LOCALBIN)
rm -f $(LOCALBIN)/helm-*
-curl -s --fail $(HELM_INSTALL_SCRIPT) | USE_SUDO=false HELM_INSTALL_DIR=$(LOCALBIN) DESIRED_VERSION=$(HELM_VERSION) BINARY_NAME=helm PATH="$(LOCALBIN):$(PATH)" bash
+curl -s --fail $(HELM_INSTALL_SCRIPT) | USE_SUDO=true bash
Collaborator

All binary installations and their usage are tested on various systems and architectures. If you really think this change is required, please explain it. Also, this is out of scope for the PR and isn't related to the OpenStack support.

Contributor Author

See the above comment. If we install packages for use, we should install them correctly, and if not, let the user work on that outside of this script.

check-kind-network: ## Ensure the Docker network kind is configured correctly
@if docker network inspect kind >/dev/null 2>&1; then \
NETWORK_SUBNET=$$(docker network inspect kind --format '{{(index .IPAM.Config 0).Subnet}}'); \
if [ "$${NETWORK_SUBNET}" = "172.18.0.0/16" ]; then \
Collaborator

Why do you use the hardcoded 172.18.0.0/16 value? It's not the default CIDR for kind. How do you handle situations where it's already used by another system or network?

Contributor Author

This is our internal Mirantis subnet. On all my systems kind installs with this subnet, so it's best to check for it and configure something else if it's in use.

Collaborator

I've seen some conversations about making this repo internal and Mirantis-only, but for now it's a public repo and anyone can use it. I'm not sure it's a good idea to hardcode internal Mirantis configuration here.

Contributor Author

It's not hardcoding Mirantis subnets per se, it's avoiding a clash. It took me a fair bit of time to debug why OpenStack wasn't working on the kind cluster vs Lens Desktop or my other clusters, and it was because of this kind configuration. If we leave the check in, it's obvious what is going on, so people can change it if needed.
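The check being debated can be made explicit without silently assuming anything: compare the kind network's subnet against the subnet the environment uses and tell the user how to recreate the network on a clash. A minimal sketch, with several assumptions: the 172.18.0.0/16 value comes from the diff above, the 172.30.0.0/16 replacement subnet and the `same_slash16` helper are made up, and a real check would do proper CIDR math rather than a /16 prefix comparison:

```shell
#!/bin/sh
# Sketch of a subnet-clash check: compare the CIDR Docker assigned to the
# "kind" network against a CIDR the demo environment already uses, and
# suggest recreating the network on overlap.
set -eu

# Returns 0 when the two networks share the same 16-bit prefix.
same_slash16() {
    a_prefix="$(echo "$1" | cut -d. -f1-2)"
    b_prefix="$(echo "$2" | cut -d. -f1-2)"
    [ "$a_prefix" = "$b_prefix" ]
}

ENV_SUBNET="172.18.0.0/16"   # subnet already used by the environment
# In the Makefile this value would come from:
#   docker network inspect kind --format '{{(index .IPAM.Config 0).Subnet}}'
KIND_SUBNET="${KIND_SUBNET:-172.18.0.0/16}"

if same_slash16 "$KIND_SUBNET" "$ENV_SUBNET"; then
    echo "kind network $KIND_SUBNET clashes with $ENV_SUBNET" >&2
    echo "recreate it, e.g.: docker network rm kind && docker network create --subnet 172.30.0.0/16 kind" >&2
else
    echo "no clash"
fi
```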

$(HELM) install kcm $(KCM_REPO) --version $(KCM_VERSION) -n $(KCM_NAMESPACE) --create-namespace
@set -o pipefail; \
helm_set_args=""; \
for attempt in 1 2 3; do \
Collaborator

In my opinion, this is a very complex solution to a rather simple problem. Instead of trying to forcefully install a release into a cluster, excluding components on the fly due to conflicts, it would be better to add a description of the k0rdent installation configuration to the README, with instructions for disabling the installation of Flux and/or cert-manager if they are already present in the cluster.

Contributor Author

In that case, why not remove the step to install k0rdent and instead provide the helm commands in the README?

Collaborator

Yes, that could be a solution. My point was that adding workarounds to the repo just to close the specific gaps you've hit at the moment is not a good idea.

Collaborator

As far as I can see, all my questions on this PR are about the case where you already have a cluster and decide to use it for the k0rdent installation. It's a good case, but a bit out of scope of the initial idea. I think we can keep only the OpenStack deployments here and move the rest to a separate PR, where we can build a proper solution and test it on all systems and platforms.
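For reference, the retry loop in the hunk above boils down to a generic "attempt N times with backoff" pattern. A minimal sketch, assuming nothing about the Makefile's real logic (`retry` is an illustrative helper name, and the helm invocation shown in the comment is just the command from the diff):

```shell
#!/bin/sh
# Generic form of a retry loop: attempt a command up to N times,
# pausing between failed attempts.
set -u

retry() {
    attempts="$1"; shift
    for i in $(seq 1 "$attempts"); do
        if "$@"; then
            return 0                 # command succeeded, stop retrying
        fi
        echo "attempt $i/$attempts failed, retrying..." >&2
        sleep 1                      # simple fixed backoff
    done
    return 1                         # all attempts exhausted
}

# Illustrative usage (not run here):
#   retry 3 helm install kcm "$KCM_REPO" --version "$KCM_VERSION" \
#       -n "$KCM_NAMESPACE" --create-namespace
```

Whether this belongs in the Makefile at all is exactly the debate above; the reviewer's alternative is to document the conflicting-component case in the README instead.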


.PHONY: watch-k0rdent-deployment
watch-k0rdent-deployment: .check-binary-kubectl
watch-k0rdent-deployment: ## Monitor k0rdent deployment
@while true; do\
echo "Checking if management object exists..."; \
Collaborator

This line will be shown many times even after we already know the Management object exists. I don't think it's really necessary.
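The reviewer's point can be addressed by announcing what the loop is waiting for once and then polling quietly. A minimal sketch; `wait_for` is a made-up helper, and the kubectl command in the comment is illustrative (the `management`/`kcm` resource names are assumptions, not confirmed by the diff):

```shell
#!/bin/sh
# Sketch: print the status message a single time, then poll quietly
# until the check passes, instead of echoing on every iteration.
set -u

wait_for() {
    msg="$1"; shift
    echo "$msg"        # printed exactly once
    until "$@"; do
        sleep 1        # poll quietly, no repeated output
    done
    echo "done"
}

# Illustrative usage against a cluster (not run here):
#   wait_for "Checking if management object exists..." \
#       kubectl get management kcm -n kcm-system
```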

@@ -0,0 +1,27 @@
apiVersion: k0rdent.mirantis.com/v1alpha1
Collaborator

What is this template for? It's not used anywhere. The same goes for azure/0.0.1.yaml, azure/0.0.2-ingress.yaml, and azure/0.0.2.yaml.

Contributor Author

I couldn't see how the other templates were being used for Azure; it looked like these were missing. I can remove them if we're sure all the Azure demos work.

@wkonitzer (Contributor Author)

OK, let me pull out things into different PRs.
