[DOC] Fix broken links and linter errors (#3626)
* Fix broken links and linter errors

* Apply suggestions from code review
knylander-grafana committed Apr 30, 2024
1 parent a0f1546 commit d94e1f2
Showing 3 changed files with 37 additions and 25 deletions.
6 changes: 4 additions & 2 deletions docs/sources/tempo/_index.md
@@ -41,10 +41,12 @@ cards:

Distributed tracing visualizes the lifecycle of a request as it passes through a set of applications.

Tempo is cost-efficient, and only requires an object storage to operate. Tempo is deeply integrated with Grafana, Mimir, Prometheus, and Loki. You can use Tempo with open source tracing protocols, including Jaeger, Zipkin, or OpenTelemetry.
Tempo is cost-efficient and only requires object storage to operate.
Tempo is deeply integrated with Grafana, Mimir, Prometheus, and Loki.
You can use Tempo with open source tracing protocols, including Jaeger, Zipkin, or OpenTelemetry.
<p align="center"><img src="getting-started/assets/trace_custom_metrics_dash.png" alt="Trace visualization in Grafana "></p>

Tempo integrates well with a number of existing open source tools:
Tempo integrates well with a number of open source tools:

- **Grafana** ships with native support using the built-in [Tempo data source](/docs/grafana/latest/datasources/tempo/).
- **Grafana Loki**, with its powerful query language LogQL v2, lets you filter requests that you care about and jump to traces using the [Derived fields support in Grafana](/docs/grafana/latest/datasources/loki/#derived-fields).
39 changes: 23 additions & 16 deletions docs/sources/tempo/configuration/grafana-agent/_index.md
@@ -10,20 +10,20 @@ aliases:

{{< docs/shared source="alloy" lookup="agent-deprecation.md" version="next" >}}

The [Grafana Agent](https://github.com/grafana/agent) is a telemetry
[Grafana Agent](https://github.com/grafana/agent) is a telemetry
collector for sending metrics, logs, and trace data to the opinionated
Grafana observability stack.

It is commonly used as a tracing pipeline, offloading traces from the
It's commonly used as a tracing pipeline, offloading traces from the
application and forwarding them to a storage backend.
The Grafana Agent tracing stack is built using OpenTelemetry.
Grafana Agent tracing stack is built using OpenTelemetry.

The Grafana Agent supports receiving traces in multiple formats:
Grafana Agent supports receiving traces in multiple formats:
OTLP (OpenTelemetry), Jaeger, Zipkin, and OpenCensus.

On top of receiving and exporting traces, the Grafana Agent contains many
On top of receiving and exporting traces, Grafana Agent contains many
features that make your distributed tracing system more robust, and
leverages all the data that is processed in the pipeline.
leverages all the data that's processed in the pipeline.

## Agent modes

@@ -32,7 +32,8 @@ Grafana Agent is available in two different variants:
* [Static mode](/docs/agent/latest/static): The original Grafana Agent.
* [Flow mode](/docs/agent/latest/flow): The new, component-based Grafana Agent.

Grafana Agent Flow configuration files are [written in River](/docs/agent/latest/flow/config-language/). Static configuraiton files are [written in YAML](/docs/agent/latest/static/configuration/).
Grafana Agent Flow configuration files are [written in River](/docs/agent/latest/flow/concepts/config-language/).
Static configuration files are [written in YAML](/docs/agent/latest/static/configuration/).
Examples in this document are for Flow mode.

For more information, refer to the [Introduction to Grafana Agent](/docs/agent/latest/about/).
@@ -41,7 +42,8 @@ For more information, refer to the [Introduction to Grafana Agent](/docs/agent/l

The Grafana Agent can be configured to run a set of tracing pipelines to collect data from your applications and write it to Tempo.
Pipelines are built using OpenTelemetry,
and consist of `receivers`, `processors` and `exporters`. The architecture mirrors that of the OTel Collector's [design](https://github.com/open-telemetry/opentelemetry-collector/blob/846b971758c92b833a9efaf742ec5b3e2fbd0c89/docs/design.md).
and consist of `receivers`, `processors`, and `exporters`.
The architecture mirrors that of the OTel Collector's [design](https://github.com/open-telemetry/opentelemetry-collector/blob/846b971758c92b833a9efaf742ec5b3e2fbd0c89/docs/design.md).
See the [configuration reference](/agent/latest/static/configuration/traces-config/) for all available configuration options.

<p align="center"><img src="https://raw.githubusercontent.com/open-telemetry/opentelemetry-collector/846b971758c92b833a9efaf742ec5b3e2fbd0c89/docs/images/design-pipelines.png" alt="Tracing pipeline architecture"></p>
@@ -51,12 +53,17 @@ pipelines, each of which collects separate spans and sends them to different
backends.

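As a concrete sketch of this shape (receiver, processor, exporter), a minimal pipeline written in static-mode YAML for brevity might look like the following; the pipeline name, the `env` attribute, and the `tempo:4317` endpoint are assumptions to adapt to your environment.

```yaml
traces:
  configs:
    - name: default                  # assumed pipeline name
      receivers:
        otlp:
          protocols:
            grpc:                    # receive OTLP/gRPC spans on the default port
      attributes:
        actions:
          - key: env                 # assumed attribute added to every span
            value: prod
            action: upsert
      remote_write:
        - endpoint: tempo:4317       # assumed Tempo distributor address
          insecure: true
```

Multiple entries under `configs` define separate pipelines, each with its own receivers and backends.
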
### Receiving traces

<!-- vale Grafana.Parentheses = NO -->
<!-- vale Grafana.Acronyms = NO -->
<!-- vale Grafana.Archives = NO -->
The Grafana Agent supports multiple ingestion receivers:
OTLP (OpenTelemetry), Jaeger, Zipkin, OpenCensus and Kafka.
OTLP (OpenTelemetry), Jaeger, Zipkin, OpenCensus, and Kafka.
<!-- vale Grafana.Archives = YES -->
<!-- vale Grafana.Acronyms = YES -->
<!-- vale Grafana.Parentheses = YES -->

Each tracing pipeline can be configured to receive traces in all these formats.
Traces that arrive to a pipeline will go through the receivers/processors/exporters defined in it.
Traces that arrive at a pipeline go through the receivers/processors/exporters defined in that pipeline.
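
For example, a single pipeline can enable several of these receivers side by side; the following static-mode sketch uses assumed defaults for ports and protocols.

```yaml
traces:
  configs:
    - name: default
      receivers:
        otlp:
          protocols:
            grpc:              # OTLP over gRPC
            http:              # OTLP over HTTP
        jaeger:
          protocols:
            thrift_http:       # Jaeger clients sending Thrift over HTTP
        zipkin:                # Zipkin v2, default settings
      remote_write:
        - endpoint: tempo:4317 # assumed Tempo endpoint
          insecure: true
```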

### Pipeline processing

@@ -77,7 +84,7 @@ To configure it, refer to the `attributes` block in the [configuration reference
#### Attaching metadata with Prometheus Service Discovery

Prometheus Service Discovery mechanisms enable you to attach the same metadata to your traces as your metrics.
For example, for Kubernetes users this means that you can dynamically attach metadata for namespace, pod, and name of the container sending spans.
For example, Kubernetes users can dynamically attach metadata for the namespace, Pod, and name of the container sending spans.

```yaml
traces:
```
@@ -101,10 +108,10 @@ traces:
This feature isn’t just useful for Kubernetes users, however.
All of Prometheus' [various service discovery mechanisms](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file) are supported here.
This means you can use the same scrape_configs between your metrics, logs, and traces to get the same set of labels,
This means you can use the same `scrape_configs` between your metrics, logs, and traces to get the same set of labels,
and easily transition between your observability data when moving from your metrics, logs, and traces.

To configure it, refer to the `scrape_configs` block in the [configuration reference](/docs/agent/latest/configuration/traces-config).
Refer to the `scrape_configs` block in the [configuration reference](/docs/agent/latest/configuration/traces-config).
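
A fuller sketch of such a pipeline in static-mode YAML might look like the following; the job name, discovery role, and endpoint are assumptions.

```yaml
traces:
  configs:
    - name: default
      receivers:
        otlp:
          protocols:
            grpc:
      scrape_configs:
        - job_name: kubernetes-pods      # assumed job name
          kubernetes_sd_configs:
            - role: pod                  # discover Pods so their labels can be attached to spans
      remote_write:
        - endpoint: tempo:4317           # assumed Tempo endpoint
          insecure: true
```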

#### Trace discovery through automatic logging

@@ -119,7 +126,7 @@ With this feature, sampling decisions can be made based on data from a trace, ra

For a detailed description, go to [Tail-based sampling]({{< relref "./tail-based-sampling" >}}).

For additional information, refer to the blog post, [An introduction to trace sampling with Grafana Tempo and Grafana Agent](/blog/2022/05/11/an-introduction-to-trace-sampling-with-grafana-tempo-and-grafana-agent)
For additional information, refer to the blog post, [An introduction to trace sampling with Grafana Tempo and Grafana Agent](/blog/2022/05/11/an-introduction-to-trace-sampling-with-grafana-tempo-and-grafana-agent).
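
As a rough illustration, a tail sampling block in a static-mode pipeline might look like the following sketch; the policy type and threshold are assumptions, and the policy schema follows the OpenTelemetry tail sampling processor.

```yaml
traces:
  configs:
    - name: default
      receivers:
        otlp:
          protocols:
            grpc:
      tail_sampling:
        policies:
          - type: latency            # assumed policy: keep traces slower than the threshold
            latency:
              threshold_ms: 500
      remote_write:
        - endpoint: tempo:4317       # assumed Tempo endpoint
          insecure: true
```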

#### Generating metrics from spans

@@ -132,7 +139,7 @@ Go to [Span metrics]({{< relref "./span-metrics" >}}) for a more detailed explan
Service graph metrics represent the relationships between services within a distributed system.

The service graphs processor builds a map of services by analyzing traces, with the objective of finding _edges_.
Edges are spans with a parent-child relationship, that represent a jump (e.g. a request) between two services.
Edges are spans with a parent-child relationship that represent a jump, such as a request, between two services.
The number of requests and their duration are recorded as metrics, which are used to represent the graph.
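
In static mode, enabling the processor is a single switch on the pipeline, as in the following sketch; delivery of the generated metrics is configured separately and is omitted here.

```yaml
traces:
  configs:
    - name: default
      receivers:
        otlp:
          protocols:
            grpc:
      service_graphs:
        enabled: true                # turn on edge detection for this pipeline (field name assumed)
      remote_write:
        - endpoint: tempo:4317       # assumed Tempo endpoint
          insecure: true
```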

To read more about this processor, go to its [section]({{< relref "./service-graphs" >}}).
17 changes: 10 additions & 7 deletions docs/sources/tempo/setup/operator/monolithic.md
@@ -9,17 +9,19 @@ aliases:

# Monolithic deployment

The `TempoMonolithic` Custom Resource (CR) creates a Tempo deployment in [Monolithic mode](https://grafana.com/docs/tempo/<TEMPO_VERSION>/setup/deployment/#monolithic-mode).
In this mode, all components of the Tempo deployment (compactor, distributor, ingester, querier and query-frontend) are contained in a single container.
The `TempoMonolithic` Custom Resource (CR) creates a Tempo deployment in [Monolithic mode]({{< relref "../../setup/deployment#monolithic-mode" >}}).
In this mode, a single container has all components of the Tempo deployment, including the compactor, distributor, ingester, querier, and query-frontend.

This type of deployment is ideal for small deployments, demo and test setups, and supports storing traces in memory, in a Persistent Volume and in object storage.
This type of deployment is ideal for small deployments and for demo and test setups, and supports storing traces in memory, in a Persistent Volume, or in object storage.

{{< admonition type="note" >}}
The monolithic deployment of Tempo does not scale horizontally. If you require horizontal scaling, please use the `TempoStack` CR for a Tempo deployment in [Microservices mode](https://grafana.com/docs/tempo/<TEMPO_VERSION>/setup/deployment/#microservices-mode).
The monolithic deployment of Tempo doesn't scale horizontally.
If you require horizontal scaling, use the `TempoStack` CR for a Tempo deployment in [Microservices mode](https://grafana.com/docs/tempo/<TEMPO_VERSION>/setup/deployment/#microservices-mode).
{{< /admonition >}}

## Quickstart

The following manifest creates a Tempo monolithic deployment with trace ingestion over OTLP/gRPC and OTLP/HTTP, storing traces in a 2 GiB tmpfs volume (in-memory storage).
The following manifest creates a Tempo monolithic deployment with trace ingestion over OTLP/gRPC and OTLP/HTTP, storing traces in a 2 GiB `tmpfs` volume (in-memory storage).

```yaml
apiVersion: tempo.grafana.com/v1alpha1
@@ -33,11 +35,12 @@ spec:
size: 2Gi
```
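
For reference, a complete manifest along these lines might look like the following sketch; the resource name and the exact field names under `spec` are assumptions consistent with the service names used below.

```yaml
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoMonolithic
metadata:
  name: sample              # yields in-cluster services named tempo-sample
spec:
  storage:
    traces:
      backend: memory       # assumed: store traces in a tmpfs (in-memory) volume
      size: 2Gi
```
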
Once the pod is ready, you can send traces to `tempo-sample:4317` (OTLP/gRPC) and `tempo-sample:4318` (OTLP/HTTP) inside the cluster.
After the Pod is ready, you can send traces to `tempo-sample:4317` (OTLP/gRPC) and `tempo-sample:4318` (OTLP/HTTP) inside the cluster.

To configure a Grafana data source, use the URL `http://tempo-sample:3200` (available inside the cluster).
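
For example, with file-based provisioning the data source might be declared like this sketch; the file path and data source name are assumptions.

```yaml
# provisioning/datasources/tempo.yaml (assumed location)
apiVersion: 1
datasources:
  - name: Tempo             # assumed data source name
    type: tempo
    access: proxy
    url: http://tempo-sample:3200
```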

## CRD Specification
## CRD specification

A manifest with all available configuration options is available here: [tempo.grafana.com_tempomonolithics.yaml](https://github.com/grafana/tempo-operator/blob/main/docs/spec/tempo.grafana.com_tempomonolithics.yaml).

{{< admonition type="note" >}}
