"No data for selected query" in Explore Tempo menu #4200

Open
eunhwa-park opened this issue Oct 17, 2024 · 10 comments
Labels
type/docs Improvements or additions to documentation

Comments

@eunhwa-park

Hi, I’m trying to use the Explore Tempo plugin. I've connected the Tempo data source, and traces are visible, but in the RED metrics or other sections, it shows "No data for selected query." There are no errors in the logs of each pod either.

  • Tempo app version: 2.6.0
  • Tempo chart: tempo-distributed-1.18.4

What could be misconfigured? Below is the values.yaml file.

traces:
  otlp:
    http:
      enabled: true
    grpc:
      enabled: true
tempo:
  structuredConfig:
    ingester:
      lifecycler:
        ring:
          replication_factor: 3
      max_block_bytes: 104857600
      max_block_duration: 10m
      complete_block_timeout: 15m

storage:
  trace:
    backend: s3
    s3:
      region: ap-northeast-2
      bucket: owl-tempo
      endpoint: s3.ap-northeast-2.amazonaws.com
      insecure: true
    search:
      cache_control:
        footer: true
    pool:
      max_workers: 400
      queue_depth: 20000
    wal:
      path: /var/tempo/wal

distributor:
  replicas: 2
  config:
    log_received_spans:
      enabled: true

ingester:
  replicas: 3
  persistence:
    enabled: true
    size: 10Gi

serviceAccount:
  create: true
  name: "tempo"
  annotations:
    "eks.amazonaws.com/role-arn": "arn:aws:iam::xxxxx:role/xxxx-tempo-irsa"
  automountServiceAccountToken: true

global_overrides:
  metrics_generator_processors:
    - service-graphs
    - span-metrics
    - local-blocks
metricsGenerator:
  enabled: true
  config:
    processor:
      local_blocks:
        flush_to_storage: true


Thank you!

@joe-elliott
Member

joe-elliott commented Oct 17, 2024

If you run a basic TraceQL metrics query in normal Explore, do you get a result? Something like:

{} | rate()

@eunhwa-park
Author

No. When I query only {} I get results, but when I add {} | rate(), no result is returned.
Is there any additional configuration needed for the rate function?

@joe-elliott
Member

Here is a doc about configuring Tempo for TraceQL metrics:

https://grafana.com/docs/tempo/latest/operations/traceql-metrics/#before-you-begin

and I think a few details for 2.6 are missing that can be found in the release notes (cc @knylander-grafana):

https://grafana.com/docs/tempo/latest/release-notes/v2-6/#operational-change-for-traceql-metrics
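
In raw Tempo config terms, the operational change amounts to enabling the local-blocks processor and flushing its blocks to storage. A minimal sketch, assuming the layout used elsewhere in this thread (the exact overrides syntax depends on your Tempo version and deployment):

overrides:
  '*':
    metrics_generator:
      processors:
        - local-blocks        # processor that backs TraceQL metrics queries
metrics_generator:
  processor:
    local_blocks:
      flush_to_storage: true  # flush completed blocks to the object-store backend
  traces_storage:
    path: /var/tempo/traces   # local path for the generator's trace blocks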

Unfortunately, I don't know much about Helm. If you provide the generated Tempo config, I can probably give better advice.

@eunhwa-park
Author

eunhwa-park commented Oct 21, 2024

I followed the linked page and added the settings to metricsGenerator, but TraceQL metrics still don't work. There is no /var/tempo/traces path in the tempo-metrics-generator pod. Could this be due to a misconfiguration?

$ tree /var/tempo/
/var/tempo/
└── wal
    └── blocks

I am sharing the Tempo configuration. If there are any missing parts, please let me know. Thanks a lot!

traces:
  otlp:
    http:
      enabled: true
    grpc:
      enabled: true
tempo:
  structuredConfig:
    ingester:
      lifecycler:
        ring:
          replication_factor: 3
      max_block_bytes: 104857600
      max_block_duration: 10m
      complete_block_timeout: 15m

storage:
  trace:
    backend: s3
    s3:
      region: ap-northeast-2
      bucket: xxx-tempo
      endpoint: s3.ap-northeast-2.amazonaws.com
      insecure: true
    search:
      cache_control:
        footer: true
    pool:
      max_workers: 400
      queue_depth: 20000
    wal:
      path: /var/tempo/wal
    local:
      path: /var/tempo/traces

distributor:
  replicas: 2
  config:
    log_received_spans:
      enabled: true

ingester:
  replicas: 3
  persistence:
    enabled: true
    size: 10Gi

serviceAccount:
  create: true
  name: "tempo"
  annotations:
    "eks.amazonaws.com/role-arn": "arn:aws:iam::xxxx:role/xxx-tempo-irsa"
  automountServiceAccountToken: true

global_overrides:
  metrics_generator_processors:
    - service-graphs
    - span-metrics
    - local-blocks
metricsGenerator:
  enabled: true
  config:
    processor:
      local_blocks:
        flush_to_storage: true
        filter_server_spans: false
    storage:
      path: /var/tempo/wal
    traces_storage:
      path: /var/tempo/traces

@knylander-grafana
Contributor

knylander-grafana commented Oct 21, 2024

Here is a doc about configuring Tempo for TraceQL metrics:

https://grafana.com/docs/tempo/latest/operations/traceql-metrics/#before-you-begin

and I think a few details for 2.6 are missing that can be found in the release notes (cc @knylander-grafana)

I've updated the configuration docs to add the 2.6 info here.

If you find anything else missing in the docs, please let me know and we'll get them updated.

knylander-grafana added the type/docs (Improvements or additions to documentation) label on Oct 21, 2024
@joe-elliott
Member

I think you're sharing the values.yaml file from the Helm chart, but I'm not very familiar with that. If you could share the actual rendered Tempo ConfigMap, I could probably find the misconfiguration.

@eunhwa-park
Author

Thanks for your support. Here is my tempo-config ConfigMap YAML.

tempo-config-cm.txt

@joe-elliott
Member

Not seeing anything obvious in your config. It looks like it should work. Can you step through this metrics-generator troubleshooting guide to make sure the generators are receiving and processing spans:

https://grafana.com/docs/tempo/latest/troubleshooting/metrics-generator/

@MikeHsuOpennet

MikeHsuOpennet commented Nov 5, 2024

Same here after upgrading from 2.5.0 to 2.6.1; I noticed it only returns about the last 30 minutes of data.

cache:
  caches:
  - memcached:
      consistent_hash: true
      host: 'tempo-memcached'
      service: memcached-client
      timeout: 500ms
    roles:
    - parquet-footer
    - bloom
    - frontend-search
compactor:
  compaction:
    block_retention: 336h
    compacted_block_retention: 1h
    compaction_cycle: 30s
    compaction_window: 1h
    max_block_bytes: 107374182400
    max_compaction_objects: 6000000
    max_time_per_tenant: 5m
    retention_concurrency: 10
    v2_in_buffer_bytes: 5242880
    v2_out_buffer_bytes: 20971520
    v2_prefetch_traces_count: 1000
  ring:
    kvstore:
      store: memberlist
distributor:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
  ring:
    kvstore:
      store: memberlist
ingester:
  flush_all_on_shutdown: true
  lifecycler:
    ring:
      kvstore:
        store: memberlist
      replication_factor: 1
    tokens_file_path: /var/tempo/tokens.json
memberlist:
  abort_if_cluster_join_fails: false
  bind_addr: []
  bind_port: 7946
  gossip_interval: 1s
  gossip_nodes: 2
  gossip_to_dead_nodes_time: 30s
  join_members:
  - dns+tempo-gossip-ring:7946
  leave_timeout: 5s
  left_ingesters_timeout: 5m
  max_join_backoff: 1m
  max_join_retries: 10
  min_join_backoff: 1s
  node_name: ""
  packet_dial_timeout: 5s
  packet_write_timeout: 5s
  pull_push_interval: 30s
  randomize_node_name: true
  rejoin_interval: 0s
  retransmit_factor: 2
  stream_timeout: 10s
metrics_generator:
  metrics_ingestion_time_range_slack: 30s
  processor:
    service_graphs:
      dimensions: []
      histogram_buckets:
      - 0.1
      - 0.2
      - 0.4
      - 0.8
      - 1.6
      - 3.2
      - 6.4
      - 12.8
      max_items: 10000
      wait: 10s
      workers: 10
    span_metrics:
      dimensions: []
      histogram_buckets:
      - 0.002
      - 0.004
      - 0.008
      - 0.016
      - 0.032
      - 0.064
      - 0.128
      - 0.256
      - 0.512
      - 1.02
      - 2.05
      - 4.1
  registry:
    collection_interval: 60s
    external_labels: {}
    stale_duration: 15m
  ring:
    kvstore:
      store: memberlist
  storage:
    path: /var/tempo/wal
    remote_write:
    - send_exemplars: true
      url: http://test.svc.cluster.local:9090/api/v1/write
    remote_write_add_org_id_header: true
    remote_write_flush_deadline: 1m
    wal: null
  traces_storage:
    path: /var/tempo/traces
multitenancy_enabled: false
overrides:
  per_tenant_override_config: /runtime-config/overrides.yaml
querier:
  frontend_worker:
    frontend_address: tempo-query-frontend-discovery:9095
  max_concurrent_queries: 20
  search:
    external_backend: null
    external_endpoints: []
    external_hedge_requests_at: 8s
    external_hedge_requests_up_to: 2
    prefer_self: 10
    query_timeout: 600s
  trace_by_id:
    query_timeout: 300s
query_frontend:
  max_outstanding_per_tenant: 2000
  max_retries: 2
  metrics:
    max_duration: 24h
  search:
    concurrent_jobs: 1000
    target_bytes_per_job: 104857600
  trace_by_id:
    query_shards: 50
server:
  grpc_server_max_recv_msg_size: 10485760
  grpc_server_max_send_msg_size: 10485760
  http_listen_port: 3100
  http_server_read_timeout: 600s
  http_server_write_timeout: 600s
  log_format: logfmt
  log_level: info
storage:
  trace:
    backend: s3

overrides:
  '*':
    global:
      max_bytes_per_trace: 2000000
    ingestion:
      burst_size_bytes: 200000000
      rate_limit_bytes: 150000000
    metrics_generator:
      processors:
      - service-graphs
      - span-metrics
      - local-blocks

@MikeHsuOpennet

I think it's fixed after setting the config from https://grafana.com/docs/tempo/latest/release-notes/v2-6/#operational-change-for-traceql-metrics, which @joe-elliott mentioned.

metricsGenerator:
  config:
    processor:
      local_blocks:
        filter_server_spans: false
        flush_to_storage: true

Big thanks!
