The kubeconfig generated is missing the region argument #1038

Open
bryantbiggs opened this issue Feb 14, 2024 · 4 comments
Labels
impact/usability Something that impacts users' ability to use the product easily and intuitively kind/bug Some behavior is incorrect or out of spec

Comments

@bryantbiggs

What happened?

I am trying to construct a Kubernetes provider that's suitable for deploying Helm charts onto an EKS cluster. However, I am getting a "cluster not found" error because the kubeconfig generated by the Pulumi EKS provider does not contain the region flag.

Example

Use this project's https://github.com/pulumi/pulumi-eks/tree/master/examples/aws-go-eks-helloworld example and inspect the generated kubeconfig; you will see it does not contain the --region <region> argument.

Output of pulumi about

x

Additional context

This is what the aws eks update-kubeconfig --name <name> command generates:

# truncated for brevity
exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - <region>
      - eks
      - get-token
      - --cluster-name
      - <name>
      - --output
      - json
      command: aws

And this is what the kubeconfig generated by Pulumi contains:

# truncated for brevity
exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - eks
      - get-token
      - --cluster-name
      - <name>
      command: aws

Contributing

Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

@bryantbiggs bryantbiggs added kind/bug Some behavior is incorrect or out of spec needs-triage Needs attention from the triage team labels Feb 14, 2024
@bryantbiggs
Author

Potentially related to #896 (comment): there are no good examples that show how to correctly generate a provider that can be passed to Helm or Kubernetes resources.

@mjeffryes mjeffryes added impact/usability Something that impacts users' ability to use the product easily and intuitively and removed needs-triage Needs attention from the triage team labels Feb 14, 2024
@mjeffryes
Contributor

Thanks for reporting this @bryantbiggs. I suspect we're not exporting the region because we pick it up from the environment or config, so a Pulumi K8s program doesn't need the region in the kubeconfig, but it does seem like a meaningful omission if the user intends to use the config with other tools!

To your second comment: It's true that there's not a good in-repo example for generating a k8s provider from an eks cluster resource, but you can find some instructions in our docs (e.g. the third code block in this section: https://www.pulumi.com/docs/clouds/aws/guides/eks/#provisioning-a-new-eks-cluster). There are also examples of this in our pulumi/examples repo (https://github.com/pulumi/examples/blob/master/aws-ts-eks-distro/index.ts). We're thinking of renaming the examples folder in the provider repos, since those are actually used for e2e testing, not really as examples.
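
For reference, a minimal sketch of that pattern along the lines of the linked docs section (the cluster name, chart, and repo URL here are illustrative, not taken from this repo):

import * as eks from '@pulumi/eks'
import * as k8s from '@pulumi/kubernetes'

// Create the EKS cluster; the component exposes its kubeconfig as an output.
const cluster = new eks.Cluster('my-cluster')

// Build an explicit Kubernetes provider from that kubeconfig.
const provider = new k8s.Provider('eks-k8s', {
  kubeconfig: cluster.kubeconfig.apply(JSON.stringify),
})

// Pass the provider to Helm/Kubernetes resources so they target this cluster.
const chart = new k8s.helm.v3.Chart('nginx-ingress', {
  chart: 'nginx-ingress',
  fetchOpts: { repo: 'https://helm.nginx.com/stable' },
}, { provider })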

@bryantbiggs
Author

because we pick it up from the environment or config so a Pulumi K8s program doesn't need the region in the kubeconfig, but it does seem like a meaningful omission if the user intends to use the config with other tools!

I don't think this is quite accurate. This delegates the token retrieval to the awscli, so either you explicitly tell the CLI which region to query the cluster in, or you leave that to the normal awscli lookup options. It's this second part that is worrisome from an IaC perspective, because it needs to be reproducible across a number of different contexts (executing pulumi up locally, from within a CI process, etc.). My default credentials profile might use us-east-1 as the default region, but how do I tell Pulumi to connect to the cluster it created in us-west-2? Similarly, I may have AWS_DEFAULT_REGION=eu-west-1 set for some odd reason. I think passing the region from Pulumi down to the kubeconfig is the only way users can ensure the right cluster is queried. There are several ways this could be accomplished; the key point is a direct, explicit relationship between the Pulumi context and the command arguments.
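
For illustration only, a minimal sketch of what that explicit wiring could look like today in user code (the addRegionArg helper is hypothetical, and it assumes the region is available via the standard aws:region stack config):

import * as pulumi from '@pulumi/pulumi'

const awsRegion = new pulumi.Config('aws').require('region')

// Prepend "--region <region>" to the kubeconfig's exec args so `aws eks get-token`
// queries the cluster in the region Pulumi actually deployed it to, regardless of
// ambient AWS_DEFAULT_REGION or profile defaults.
const addRegionArg = (kubeconfig: pulumi.Output<any>): pulumi.Output<string> =>
  pulumi.all([kubeconfig, awsRegion]).apply(([raw, region]) => {
    // The kubeconfig may be a JSON string or an object; normalize to a fresh object.
    const config = typeof raw === 'string' ? JSON.parse(raw) : JSON.parse(JSON.stringify(raw))
    const exec = config.users[0].user.exec
    exec.args = ['--region', region, ...exec.args]
    return JSON.stringify(config)
  })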

@gunzy83

gunzy83 commented Apr 4, 2024

@bryantbiggs we ran into this problem early on, when I began using discrete AWS provider objects (inside our Pulumi project) that point to a region-less profile; the region is configured via our standard config variables, so multi-region becomes a breeze. I read through the code and found exactly what you describe: the region is not passed through, and the JSON kubeconfig object does not have enough information to generate credentials when deploying k8s resources after cluster creation (e.g. namespaces, cluster roles, bindings and, in our case, a Teleport deployment; all further deploys go through that).

Our solution uses transformations (the code is a little sloppy, but since we want to get rid of it we are leaving it as is while it works):

import * as pulumi from '@pulumi/pulumi'

export const ensureKubeConfigHasAwsRegion: pulumi.ResourceTransformation = (
  args: pulumi.ResourceTransformationArgs
): pulumi.ResourceTransformationResult | undefined => {
  // Only rewrite resources that actually carry a kubeconfig property.
  if (args.type === 'pulumi:providers:kubernetes' || args.type === 'eks:index:VpcCni') {
    // eslint-disable-next-line @typescript-eslint/no-explicit-any
    const kubeConfig: pulumi.Output<any> = args.props['kubeconfig']
    const newKubeConfig = addRegionToKubeConfig(kubeConfig)
    args.props['kubeconfig'] = newKubeConfig
    return {
      props: args.props,
      opts: args.opts,
    }
  }
  return undefined
}

export const addRegionToKubeConfig = (kubeConfig: pulumi.Output<any>) => {
  // awsRegion comes from our standard stack config, defined elsewhere in the project.
  const newKubeConfig = pulumi.all([kubeConfig, awsRegion]).apply(([contents, region]) => {
    // The kubeconfig may be an object or a JSON string; normalize to a fresh object.
    let configObj: any
    if (typeof contents === 'object') {
      configObj = JSON.parse(JSON.stringify(contents))
    } else {
      configObj = JSON.parse(contents)
    }
    // Append AWS_REGION to the exec env so `aws eks get-token` targets the right region.
    const envs = configObj['users'][0]['user']['exec']['env']
    envs.push({
      name: 'AWS_REGION',
      value: region,
    })
    configObj['users'][0]['user']['exec']['env'] = envs
    return JSON.stringify(configObj)
  })
  return newKubeConfig
}

Then apply it where required:

const eksCluster = new eks.Cluster(
  'eks-cluster',
  {
    ...clusterConfig,
  },
  { parent: this, provider: awsProvider, transformations: [ensureKubeConfigHasAwsRegion] }
)

The function also needs to be applied separately to the stack output (we are deprecating this in our stack):

export = {
  'kube-config': addRegionToKubeConfig(eksCluster.kubeConfig),
}

We are likely going to remove the pulumi/eks parent resources and just manage the underlying pulumi/aws and k8s resources ourselves going forward (this issue is only one of many reasons why). Hope this helps.
