
spec.ports: Required value when running role: Stateless-Aggregator with customConfig #205

Open
jszwedko opened this issue Apr 26, 2022 · 16 comments
Labels: bug (Something isn't working)

@jszwedko (Member)

I attempted to deploy the chart with the following values file:

## Default values for Vector
## See Vector helm documentation to learn more:
## https://vector.dev/docs/setup/installation/package-managers/helm/

# nameOverride -- Override name of app
nameOverride: ""

# fullnameOverride -- Override the full qualified app name
fullnameOverride: ""

# role -- Role for this Vector (possible values: Agent, Aggregator, Stateless-Aggregator)
## Ref: https://vector.dev/docs/setup/deployment/roles/
## Each role is created with the following workloads:
## Agent - DaemonSet
## Aggregator - StatefulSet
## Stateless-Aggregator - Deployment
role: "Stateless-Aggregator"

# rollWorkload -- Add a checksum of the generated ConfigMap to workload annotations
rollWorkload: true

# commonLabels -- Add additional labels to all created resources
commonLabels: {}

## Define the Vector image to use
image:
  # image.repository -- Override default registry + name for Vector
  repository: timberio/vector
  # image.pullPolicy -- Vector image pullPolicy
  pullPolicy: IfNotPresent
  # image.pullSecrets -- Agent repository pullSecret (ex: specify docker registry credentials)
  ## Ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
  pullSecrets: []
  # image.tag -- Vector image tag to use
  # @default -- Chart's appVersion
  tag: ""

# replicas -- Set the number of Pods to create
## Valid for Aggregator and Stateless-Aggregator
replicas: 1

# podManagementPolicy -- Specify the podManagementPolicy for the Aggregator role
## Valid for Aggregator role
podManagementPolicy: OrderedReady

## Create a Secret resource for Vector to use
secrets:
  # secrets.generic -- Each Key/Value will be added to the Secret's data key, each value should be raw and NOT base64 encoded
  ## Any secrets can be provided here, it's commonly used for credentials and other access related values.
  ## NOTE: Don't commit unencrypted secrets to git!
  generic: {}
    # my_variable: "my-secret-value"
    # datadog_api_key: "api-key"
    # awsAccessKeyId: "access-key"
    # awsSecretAccessKey: "secret-access-key"

## Configure a HorizontalPodAutoscaler for Vector
## Valid for Aggregator and Stateless-Aggregator roles
autoscaling:
  # autoscaling.enabled -- Enable autoscaling for the Aggregator and Stateless-Aggregator
  enabled: false
  # autoscaling.minReplicas -- Minimum replicas for Vector's HPA
  minReplicas: 1
  # autoscaling.maxReplicas -- Maximum replicas for Vector's HPA
  maxReplicas: 10
  # autoscaling.targetCPUUtilizationPercentage -- Target CPU utilization for Vector's HPA
  targetCPUUtilizationPercentage: 80
  # autoscaling.targetMemoryUtilizationPercentage -- (int) Target memory utilization for Vector's HPA
  targetMemoryUtilizationPercentage:
  # autoscaling.customMetric -- Target a custom metric
  customMetric: {}
    #  - type: Pods
    #    pods:
    #      metric:
    #        name: utilization
    #      target:
    #        type: AverageValue
    #        averageValue: 95

podDisruptionBudget:
  # podDisruptionBudget.enabled -- Enable a PodDisruptionBudget for Vector
  enabled: false
  # podDisruptionBudget.minAvailable -- The number of Pods that must still be available after an eviction
  minAvailable: 1
  # podDisruptionBudget.maxUnavailable -- (int) The number of Pods that can be unavailable after an eviction
  maxUnavailable:

rbac:
  # rbac.create -- If true, create and use RBAC resources. Only valid for the Agent role
  create: true

psp:
  # psp.create -- If true, create a PodSecurityPolicy resource. PodSecurityPolicy is deprecated as of Kubernetes v1.21, and will be removed in v1.25
  ## Intended for use with the Agent role
  create: false

serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::071959437513:role/jesse-test-service-account-role
  automountToken: true

# podAnnotations -- Set annotations on Vector Pods
podAnnotations: {}

# podLabels -- Set labels on Vector Pods
podLabels: {}

# podPriorityClassName -- Set the priorityClassName on Vector Pods
podPriorityClassName: ""

# podSecurityContext -- Allows you to overwrite the default PodSecurityContext for Vector
podSecurityContext: {}

# securityContext -- Specify securityContext on Vector containers
securityContext: {}

# command -- Override Vector's default command
command: []

# args -- Override Vector's default arguments
args:
  - --config-dir
  - "/etc/vector/"

# env -- Set environment variables in Vector containers
## The examples below leverage examples from secrets.generic and assume no name overrides with a Release name of "vector"
env: []
  # - name: MY_VARIABLE
  #   valueFrom:
  #     secretKeyRef:
  #       name: vector
  #       key: my_variable
  # - name: AWS_ACCESS_KEY_ID
  #   valueFrom:
  #     secretKeyRef:
  #       name: vector
  #       key: awsAccessKeyId

# containerPorts -- Manually define Vector's Container ports, overrides automated generation of Container ports
containerPorts: []

# resources -- Set Vector resource requests and limits.
resources: {}
  # requests:
  #   cpu: 200m
  #   memory: 256Mi
  # limits:
  #   cpu: 200m
  #   memory: 256Mi

# updateStrategy -- Customize the updateStrategy used to replace Vector Pods
## Also used for the DeploymentStrategy for Stateless-Aggregators
## Valid options are used depending on the chosen role
## Agent (DaemonSetUpdateStrategy): https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec)
## Aggregator (StatefulSetUpdateStrategy): https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/#StatefulSetSpec
## Stateless-Aggregator (DeploymentStrategy): https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/
updateStrategy: {}
#   type: RollingUpdate
#   rollingUpdate:
#     maxUnavailable: 1

# terminationGracePeriodSeconds -- Override Vector's terminationGracePeriodSeconds
terminationGracePeriodSeconds: 60

# nodeSelector -- Allow Vector to be scheduled on selected nodes
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}

# tolerations -- Allow Vector to schedule on tainted nodes
tolerations: []

# affinity -- Allow Vector to schedule using affinity rules
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}

## Configuration for Vector's Service
service:
  # service.enabled -- If true, create and use a Service resource
  enabled: true
  # service.type -- Set type of Vector's Service
  type: "ClusterIP"
  # service.annotations -- Set annotations on Vector's Service
  annotations: {}
  # service.topologyKeys -- Specify the topologyKeys field on Vector's Service spec
  ## Ref: https://kubernetes.io/docs/concepts/services-networking/service-topology/#using-service-topology
  topologyKeys: []
  #   - "kubernetes.io/hostname"
  #   - "topology.kubernetes.io/zone"
  #   - "topology.kubernetes.io/region"
  #   - "*"
  # service.ports -- Manually set Service ports, overrides automated generation of Service ports
  ports: []

## Configuration for Vector's Ingress
ingress:
  # ingress.enabled -- If true, create and use an Ingress resource
  enabled: false
  # ingress.className -- Specify the ingressClassName, requires Kubernetes >= 1.18
  ## Ref: https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
  className: ""
  # ingress.annotations -- Set annotations on the Ingress
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  # ingress.hosts -- Configure the hosts and paths for the Ingress
  hosts: []
  #  - host: chart-example.local
  #    paths:
  #      - path: /
  #        pathType: ImplementationSpecific
  #        # Specify the port name or number on the Service
  #        # Using name requires Kubernetes >=1.19
  #        port:
  #          name: ""
  #          number: ""
  # ingress.tls -- Configure TLS for the Ingress
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

# existingConfigMaps -- List of existing ConfigMaps for Vector's configuration instead of creating a new one, if used requires dataDir to be set.
# Additionally, containerPorts and service.ports should be specified based on your supplied configuration
## If set, this parameter takes precedence over customConfig and the chart's default configs
existingConfigMaps: []

# dataDir -- Specify the path for Vector's data, only used when existingConfigMaps are used
dataDir: ""

# customConfig -- Override Vector's default configs, if used **all** options need to be specified
## This section supports using helm templates to populate dynamic values
## Ref: https://vector.dev/docs/reference/configuration/
customConfig:
  sources:
    aws_s3:
      type: aws_s3
      region: us-east-2
      sqs:
        delete_message: true
        poll_secs: 15
        queue_url: https://sqs.us-east-1.amazonaws.com/071959437513/jesse-test-queue
  sinks:
    stdout:
      type: console
      inputs: [aws_s3]
      encoding:
        codec: json

# extraVolumes -- Additional Volumes to use with Vector Pods
extraVolumes: []

# extraVolumeMounts -- Additional Volumes to mount into Vector Containers
extraVolumeMounts: []

# initContainers -- Init Containers to be added to the Vector Pod
initContainers: []

## Configuration for Vector's data persistence
persistence:
  # persistence.enabled -- If true, create and use PersistentVolumeClaims
  enabled: false
  # persistence.existingClaim -- Name of an existing PersistentVolumeClaim to use
  ## Valid for Aggregator role
  existingClaim: ""
  # persistence.storageClassName -- Specifies the storageClassName for PersistentVolumeClaims
  ## Valid for Aggregator role
  # storageClassName: default

  # persistence.accessModes -- Specifies the accessModes for PersistentVolumeClaims
  ## Valid for Aggregator role
  accessModes:
    - ReadWriteOnce
  # persistence.size -- Specifies the size of PersistentVolumeClaims
  ## Valid for Aggregator role
  size: 10Gi
  # persistence.finalizers -- Specifies the finalizers of PersistentVolumeClaims
  ## Valid for Aggregator role
  finalizers:
    - kubernetes.io/pvc-protection
  # persistence.selectors -- Specifies the selectors for PersistentVolumeClaims
  ## Valid for Aggregator role
  selectors: {}

  hostPath:
    # persistence.hostPath.path -- Override path used for hostPath persistence
    ## Valid for Agent role, persistence always used for Agent role
    path: "/var/lib/vector"

# dnsPolicy -- Specify DNS policy for Vector Pods
## Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
dnsPolicy: ClusterFirst

# dnsConfig -- Specify DNS configuration options for Vector Pods
## Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config
dnsConfig: {}
  # nameservers:
  #   - 1.2.3.4
  # searches:
  #   - ns1.svc.cluster-domain.example
  #   - my.dns.search.suffix
  # options:
  #   - name: ndots
  #     value: "2"
  #   - name: edns0

# livenessProbe -- Override default liveness probe settings, if customConfig is used requires customConfig.api.enabled true
## Requires Vector's API to be enabled
livenessProbe: {}
  # httpGet:
  #   path: /health
  #   port: api

# readinessProbe -- Override default readiness probe settings, if customConfig is used requires customConfig.api.enabled true
## Requires Vector's API to be enabled
readinessProbe: {}
  # httpGet:
  #   path: /health
  #   port: api

## Configure a PodMonitor for Vector
## Requires the PodMonitor CRD to be installed
podMonitor:
  # podMonitor.enabled -- If true, create a PodMonitor for Vector
  enabled: false
  # podMonitor.jobLabel -- Override the label to retrieve the job name from
  jobLabel: app.kubernetes.io/name
  # podMonitor.port -- Override the port to scrape
  port: prom-exporter
  # podMonitor.path -- Override the path to scrape
  path: /metrics
  # podMonitor.relabelings -- RelabelConfigs to apply to samples before scraping
  ## Ref: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
  relabelings: []
  # podMonitor.metricRelabelings -- MetricRelabelConfigs to apply to samples before ingestion
  ## Ref: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs
  metricRelabelings: []
  # podMonitor.additionalLabels -- Adds additional labels to the PodMonitor
  additionalLabels: {}
  # podMonitor.honorLabels -- If true, honor_labels is set to true in scrape config
  ## Ref: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
  honorLabels: false
  # podMonitor.honorTimestamps -- If true, honor_timestamps is set to true in scrape config
  ## Ref: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
  honorTimestamps: true

## Optional built-in HAProxy load balancer
haproxy:
  # haproxy.enabled -- If true, create a HAProxy load balancer
  enabled: false

  ## Define the HAProxy image to use
  image:
    # haproxy.image.repository -- Override default registry + name for HAProxy
    repository: haproxytech/haproxy-alpine
    # haproxy.image.pullPolicy -- HAProxy image pullPolicy
    pullPolicy: IfNotPresent
    # haproxy.image.pullSecrets -- HAProxy repository pullSecret (ex: specify docker registry credentials)
    ## Ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
    pullSecrets: []
    # haproxy.image.tag -- HAProxy image tag to use
    tag: "2.4.4"

  # haproxy.rollWorkload -- Add a checksum of the generated ConfigMap to the HAProxy Deployment
  rollWorkload: true

  # haproxy.replicas -- Set the number of HAProxy Pods to create
  replicas: 1

  serviceAccount:
    # haproxy.serviceAccount.create -- If true, create a HAProxy ServiceAccount
    create: true
    # haproxy.serviceAccount.annotations -- Annotations to add to the HAProxy ServiceAccount
    annotations: {}
    # haproxy.serviceAccount.name -- The name of the HAProxy ServiceAccount to use.
    ## If not set and create is true, a name is generated using the fullname template
    name:
    # haproxy.serviceAccount.automountToken -- Automount API credentials for the HAProxy ServiceAccount
    automountToken: true

  # haproxy.strategy -- Customize the strategy used to replace HAProxy Pods
  ## Ref: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/
  strategy: {}
    # rollingUpdate:
    #   maxSurge: 25%
    #   maxUnavailable: 25%
    # type: RollingUpdate

  # haproxy.terminationGracePeriodSeconds -- Override HAProxy's terminationGracePeriodSeconds
  terminationGracePeriodSeconds: 60

  # haproxy.podAnnotations -- Set annotations on HAProxy Pods
  podAnnotations: {}

  # haproxy.podLabels -- Set labels on HAProxy Pods
  podLabels: {}

  # haproxy.podPriorityClassName -- Set the priorityClassName on HAProxy Pods
  podPriorityClassName: ""

  # haproxy.podSecurityContext -- Allows you to overwrite the default PodSecurityContext for HAProxy
  podSecurityContext: {}
    # fsGroup: 2000

  # haproxy.securityContext -- Specify securityContext on HAProxy containers
  securityContext: {}
    # capabilities:
    #   drop:
    #   - ALL
    # readOnlyRootFilesystem: true
    # runAsNonRoot: true
    # runAsUser: 1000

  # haproxy.containerPorts -- Manually define HAProxy's Container ports, overrides automated generation of Container ports
  containerPorts: []

  ## HAProxy's Service configuration
  service:
    # haproxy.service.type -- Set type of HAProxy's Service
    type: ClusterIP
    # haproxy.service.annotations -- Set annotations on HAProxy's Service
    annotations: {}
    # haproxy.service.topologyKeys -- Specify the topologyKeys field on HAProxy's Service spec
    ## Ref: https://kubernetes.io/docs/concepts/services-networking/service-topology/#using-service-topology
    topologyKeys: []
    #   - "kubernetes.io/hostname"
    #   - "topology.kubernetes.io/zone"
    #   - "topology.kubernetes.io/region"
    #   - "*"
    # haproxy.service.ports -- Manually set HAProxy's Service ports, overrides automated generation of Service ports
    ports: []

  # haproxy.existingConfigMap -- Use this existing ConfigMap for HAProxy's configuration instead of creating a new one.
  # Additionally, haproxy.containerPorts and haproxy.service.ports should be specified based on your supplied configuration
  ## If set, this parameter takes precedence over customConfig and the chart's default configs
  existingConfigMap: ""

  # haproxy.customConfig -- Override HAProxy's default configs, if used **all** options need to be specified.
  # This parameter supports using Helm templates to insert values dynamically
  ## By default this chart will parse Vector's configuration from customConfig to generate HAProxy's config, this generated config
  ## can be overwritten with haproxy.customConfig
  customConfig: ""

  # haproxy.extraVolumes -- Additional Volumes to use with HAProxy Pods
  extraVolumes: []

  # haproxy.extraVolumeMounts -- Additional Volumes to mount into HAProxy Containers
  extraVolumeMounts: []

  # haproxy.initContainers -- Init Containers to be added to the HAProxy Pod
  initContainers: []

  ## Configure a HorizontalPodAutoscaler for HAProxy
  autoscaling:
    # haproxy.autoscaling.enabled -- Enable autoscaling for HAProxy
    enabled: false
    # haproxy.autoscaling.minReplicas -- Minimum replicas for HAProxy's HPA
    minReplicas: 1
    # haproxy.autoscaling.maxReplicas -- Maximum replicas for HAProxy's HPA
    maxReplicas: 10
    # haproxy.autoscaling.targetCPUUtilizationPercentage -- Target CPU utilization for HAProxy's HPA
    targetCPUUtilizationPercentage: 80
    # haproxy.autoscaling.targetMemoryUtilizationPercentage -- (int) Target memory utilization for HAProxy's HPA
    targetMemoryUtilizationPercentage:
    # haproxy.autoscaling.customMetric -- Target a custom metric
    customMetric: {}
      #  - type: Pods
      #    pods:
      #      metric:
      #        name: utilization
      #      target:
      #        type: AverageValue
      #        averageValue: 95

  # haproxy.resources -- Set HAProxy resource requests and limits.
  resources: {}
    # limits:
    #   cpu: 100m
    #   memory: 128Mi
    # requests:
    #   cpu: 100m
    #   memory: 128Mi

  # haproxy.livenessProbe -- Override default HAProxy liveness probe settings
  livenessProbe:
    tcpSocket:
      port: 1024

  # haproxy.readinessProbe -- Override default HAProxy readiness probe settings
  readinessProbe:
    tcpSocket:
      port: 1024

  # haproxy.nodeSelector -- Allow HAProxy to be scheduled on selected nodes
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  nodeSelector: {}

  # haproxy.tolerations -- Allow HAProxy to schedule on tainted nodes
  tolerations: []

  # haproxy.affinity -- Allow HAProxy to schedule using affinity rules
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  affinity: {}

To which it gave me:

Error: INSTALLATION FAILED: Service "vector" is invalid: spec.ports: Required value
@jszwedko jszwedko added the bug Something isn't working label Apr 26, 2022
@spencergilbert spencergilbert self-assigned this Apr 27, 2022
@spencergilbert (Contributor)

Ah - so we walk the customConfig, but with this config there are no components that actually listen on a port. So when we go to render the Service resource(s) we have no ports to render, and ports is required (as the error states).

I'll think about it a bit and try and sneak this into the 0.10.2 release, maybe just checking to see if we have any ports and letting that determine if we create the service 🤔
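As a rough sketch of that approach (the helper and template names here are assumptions based on typical chart layout, not the chart's actual code), the Service template could gate rendering on whether any ports were generated:

{{- /* templates/service.yaml (sketch): only render the Service when ports were generated */}}
{{- $generatedPorts := include "vector.ports" . | trim }}
{{- if and .Values.service.enabled (not (empty $generatedPorts)) }}
apiVersion: v1
kind: Service
metadata:
  name: {{ include "vector.fullname" . }}
spec:
  type: {{ .Values.service.type }}
  {{- /* selector/labels and manually-set service.ports are omitted from this sketch */}}
  ports:
{{ $generatedPorts | indent 4 }}
{{- end }}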

@jszwedko (Member, Author)

I think that makes sense. For context, I was trying to deploy Vector to just consume from AWS S3 so it had no need for any listeners. I think other "pull based deployments" of Vector (like consuming from Kafka) would be similar.

@spencergilbert (Contributor)

> I think that makes sense. For context, I was trying to deploy Vector to just consume from AWS S3 so it had no need for any listeners. I think other "pull based deployments" of Vector (like consuming from Kafka) would be similar.

Yep! Definitely reasonable. Looking at it though... the way I've set things up, it's not super easy to toggle it based on the "contents" of the parsed ports. I don't think this will make it into 0.10.2. As a workaround, service.enabled=false sidesteps the issue.
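In values-file form that workaround is just the chart's existing service.enabled key, set alongside the customConfig:

service:
  enabled: false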

@spencergilbert spencergilbert removed their assignment Jul 13, 2022
@patsevanton

Same error when installing the Vector Helm chart with customConfig:

helm repo add vector https://helm.vector.dev
helm repo update
kubectl create namespace vector || true
helm install --wait vector vector/vector -n vector --set "customConfig.sinks.sink_to_loki.endpoint=http://loki-loki-distributed-gateway.loki"
namespace/vector created
Error: INSTALLATION FAILED: Service "vector" is invalid: spec.ports: Required value

@spencergilbert (Contributor)

> Same error when installing the Vector Helm chart with customConfig:
>
> helm repo add vector https://helm.vector.dev
> helm repo update
> kubectl create namespace vector || true
> helm install --wait vector vector/vector -n vector --set "customConfig.sinks.sink_to_loki.endpoint=http://loki-loki-distributed-gateway.loki"
> namespace/vector created
> Error: INSTALLATION FAILED: Service "vector" is invalid: spec.ports: Required value

One immediate thing here is that --set on its own isn't going to create a valid Vector configuration file, as the entire config would just be:

sinks:
  sink_to_loki:
    endpoint: "http://loki-loki-distributed-gateway.loki"
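For comparison, a minimal complete config for that sink would also need a source, the sink's inputs, and an encoding, roughly like the sketch below (passed as a values file rather than --set; the component names and exact Loki sink options shown are illustrative, not a drop-in config):

customConfig:
  sources:
    k8s_logs:
      type: kubernetes_logs
  sinks:
    sink_to_loki:
      type: loki
      inputs: [k8s_logs]
      endpoint: http://loki-loki-distributed-gateway.loki
      encoding:
        codec: json
      labels:
        source: vector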

The cause of this is how the Service resources are generated: by default we parse the customConfig, look for any component that appears to need a port to listen on, and add that port to the resource. Both the original configuration and your sample produce configurations with no ports to listen on.

The logic doesn't currently skip rendering the Service when no ports were parsed from the config, and Kubernetes doesn't allow a Service without ports defined.

I recently looked at this but couldn't pull together anything that would work automatically. Perhaps we should just document this, or add a fail with a helpful message when there are no ports.
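As a sketch of the fail option (again, the vector.ports helper name and its placement are assumptions rather than the chart's actual code), a guard near the top of the Service template would turn the opaque Kubernetes error into a chart-level message:

{{- /* templates/service.yaml (sketch): fail early instead of rendering a port-less Service */}}
{{- if .Values.service.enabled }}
{{- $generatedPorts := include "vector.ports" . | trim }}
{{- if and (empty $generatedPorts) (empty .Values.service.ports) }}
{{- fail "customConfig defines no components that listen on a port; set service.ports manually or set service.enabled=false" }}
{{- end }}
{{- end }}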

@lapwat commented Feb 24, 2023

The vector.ports variable used by the Helm chart is generated in _helpers.tpl.

It uses the generatePort helper to generate the port. Where is this helper defined? I cannot find it in the code.

My config is

customConfig:
  sources:
    kubernetes:
      type: kubernetes_logs
  sinks:
    elasticsearch:
      type: elasticsearch
      endpoints:
        - https://elasticsearch-master:9200
      auth:
        strategy: basic
        user: elastic
        password: elastic-password
      inputs:
        - kubernetes

But vector.ports stays empty.

What would be a "valid" customConfig that would trigger the ports to be generated?

@spencergilbert (Contributor)

generatePorts is defined in the same file, here. A config that will automatically generate valid ports is one with components that receive requests into Vector.

Your example config has no such components: kubernetes_logs is local and elasticsearch is egress only, so there are no ports to define. The Service should be disabled in this case.
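For example, a config that includes a component accepting incoming traffic, such as a vector source, gives the chart a port to expose (a sketch; the source name and address are arbitrary):

customConfig:
  sources:
    vector_agents:
      type: vector
      address: 0.0.0.0:6000
  sinks:
    stdout:
      type: console
      inputs: [vector_agents]
      encoding:
        codec: json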

@lapwat commented Feb 24, 2023

Thanks! I was tweaking service.ports and did not even notice there was a service.enabled key!

@paymog commented May 18, 2023

I also see this issue when just doing an Agent deployment on version 0.21.1

@spencergilbert (Contributor)

I believe I solved this while working on a different project; I'll see if I can port those changes over to resolve this issue 👍

@seaadevil commented Oct 9, 2023

Hi, I have the same error when using Vector with the Agent role. My Loki is set up in the parent cluster, and I want to set up Vector agents on child clusters.
Error: 1 error occurred:
* Service "vector" is invalid: spec.ports: Required value

customConfig:
  sinks:
    loki:
      type: loki
      inputs: [kubernetes_logs]
      compression: snappy
      endpoint: https://loki.mydomain.com
      # labels:
      #   '"pod_labels_*"': "{{ kubernetes.pod_labels }}"
      #   source: vector
      #   "{{ event_field }}": "{{ some_other_event_field }}"
      out_of_order_action: drop
      path: /loki/api/v1/push
      remove_timestamp: true

@spencergilbert Can you please share your solution?

@lapwat commented Oct 9, 2023

@seaadevil Did you set service.enabled to true?

@seaadevil

I disabled the service, and I don't have the error

@Momotoculteur

Hey,
I'm using this chart and am also getting this error. Any update on that? :)

@dsmith3197 (Contributor)

We haven't had a chance to fix this. If you run into the spec.ports: Required value error message, you should set service.enabled=false to avoid creating the Kubernetes Service altogether.

@dsmith3197 (Contributor)

If anyone is interested in updating this, the fix would be similar to what is done here
