Replies: 6 comments 7 replies
-
cc: @dimitriscruz @alenkacz and @RafalKorepta
-
I added this to my EKS cluster security group. Also note that I reinstalled, so the NodePort is now 31200, and I changed the IP addresses so that I would not be publishing them here. I still get the same error message as before:

```
rpk cluster info --brokers 34.213.99.26:31200,54.191.107.49:31200,35.81.78.211:31200
```

Here is the output again with the `-v` option:

```
Failed to connect to broker 35.81.78.211:31200: dial tcp 35.81.78.211:31200: i/o timeout
client/metadata got error from broker -1 while fetching metadata: dial tcp 35.81.78.211:31200: i/o timeout
client/metadata not fetching metadata from broker 54.191.107.49:31200 as we would go past the metadata timeout
client/metadata skipping last retries as we would go past the metadata timeout
```
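Since rpk times out before any Kafka-level handshake happens, it can help to first check raw TCP reachability of each NodePort. Here is a sketch using bash's `/dev/tcp` (the IPs and port are the examples from above; substitute your own):

```shell
#!/usr/bin/env bash
# Probe each broker's NodePort with a short TCP timeout. An "i/o timeout"
# from rpk usually means packets never reach the node (security group,
# private subnet, NACL), rather than a Redpanda-level problem.
probe() {
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 reachable"
  else
    echo "$1:$2 NOT reachable"
  fi
}

for ip in 34.213.99.26 54.191.107.49 35.81.78.211; do
  probe "$ip" 31200
done
```

If the ports are not reachable here, the fix is at the network layer (security group rules or node placement in a public subnet), not in the Redpanda configuration.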
-
It is notable that this NodePort is also being used by the Service, and that the Service spec does not list any external IPs. Should I try installing the Cluster without `External: True`? Also, does it matter that I have TLS turned on? (In production, I would need TLS for talking across the Internet.)
-
```
kubectl -n redpanda get clusters.redpanda.vectorized.io tbn-cluster -o=jsonpath='{.status.nodes.external}'
```
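The jsonpath query above returns the advertised external addresses as a JSON-style list. A small sketch (the addresses are placeholders) for turning that into the comma-separated form `rpk --brokers` expects:

```shell
# Example value as returned by the jsonpath query above (placeholder addresses):
ADDRS='["34.213.99.26:31200","54.191.107.49:31200","35.81.78.211:31200"]'

# Strip brackets, quotes and spaces to get rpk's comma-separated broker list.
BROKERS=$(printf '%s' "$ADDRS" | tr -d '[]" ')
echo "$BROKERS"
# Then: rpk cluster info --brokers "$BROKERS"
```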
-
I also had issues using `kubectl port-forward` with rpk:

```
rpk cluster info --brokers 127.0.0.1:9092 -v
Connected to broker at 127.0.0.1:9092 (unregistered)
client/brokers registered new broker #0 at tbn-cluster-two-0.tbn-cluster-two.redpanda.svc.cluster.local.:9092
Failed to connect to broker tbn-cluster-two-0.tbn-cluster-two.redpanda.svc.cluster.local.:9092: dial tcp: lookup tbn-cluster-two-0.tbn-cluster-two.redpanda.svc.cluster.local. on 127.0.0.53:53: no such host
Closing Client
```
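The lookup fails because the broker advertises its in-cluster DNS name, which does not resolve outside the cluster, while `kubectl port-forward` only exposes 127.0.0.1. One workaround sketch (assuming a single-broker test cluster; the hostname is the one from the error above) is to point that name at loopback in /etc/hosts:

```shell
# Advertised in-cluster hostname, taken from the error output above.
HOST=tbn-cluster-two-0.tbn-cluster-two.redpanda.svc.cluster.local
LINE="127.0.0.1 $HOST"
echo "$LINE"
# Append it with: echo "$LINE" | sudo tee -a /etc/hosts
# Keep the forward running: kubectl -n redpanda port-forward pod/tbn-cluster-two-0 9092:9092
# Then: rpk cluster info --brokers "$HOST:9092"
```

With multiple brokers this only helps if every advertised name is mapped and forwarded, which is why the NodePort/external route is usually preferable for remote access.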
-
@rfinner Your spec has TLS enabled for the internal port and disabled for the external port.
The current combination in your spec would not work as expected with our existing implementation. If you looked at the generated … Having said that, I see from above that there might be further issues, but I wanted to note this down.
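For reference, a sketch of what a consistent listener configuration could look like in the Cluster spec, with TLS enabled on both listeners (or on neither). The field names follow the redpanda.vectorized.io CRD as I understand it; treat them as an assumption to verify against your operator version:

```yaml
spec:
  configuration:
    kafkaApi:
      - port: 9092        # internal listener
        tls:
          enabled: true
      - external:
          enabled: true   # external (NodePort) listener
        tls:
          enabled: true   # match the internal listener's TLS setting
```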
-
I am running vectorized/redpanda-operator:latest (installed two days ago) in an AWS EKS cluster that has public and private subnets. It appears that the external flag for my test cluster "chat-with-me" is creating a NodePort in the private subnet, and no external public IP is created. I was wondering whether this is correct, and what I could or should do in order to access the cluster remotely with rpk. Here is my test
I think the issue might be in this part of the generated Service yaml. It should have:

```yaml
spec:
  externalIPs:
    - 1.2.3.4
    - 4.5.6.7
  externalTrafficPolicy: Cluster
  ports:
    - nodePort: 30001 # will give you a random 3XXXX port
```

instead of `externalTrafficPolicy: Local`.