Currently it is difficult to make decisions about how to change the data store, since we don't fully understand how the metrics data is actually laid out.
For example, Sentry itself is not a typical customer. How many customers actually exceed the bucket size at which pre-aggregation generates a cost savings? (E.g. for quantiles, if a row has fewer than 8192 values in it, then we might as well store the raw values.)
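One way to get at this empirically would be to query the distribution of per-row value counts directly. Below is a minimal sketch that assumes the distributions live in a ClickHouse table with an `Array`-typed values column; the table name (`metrics_distributions_local`), column name (`values`), host, and the use of `clickhouse-driver` are all assumptions for illustration, not the actual schema:

```python
from clickhouse_driver import Client

# Hypothetical table/column names; substitute the real metrics schema.
TABLE = "metrics_distributions_local"
VALUES_COLUMN = "values"
BUCKET_SIZE = 8192  # threshold from above: below this, raw values are as cheap

client = Client(host="localhost")

# Count rows whose value array crosses the threshold, and look at the
# overall distribution of per-row array lengths.
total, over_threshold, length_quantiles = client.execute(
    f"""
    SELECT
        count() AS total_rows,
        countIf(length({VALUES_COLUMN}) >= {BUCKET_SIZE}) AS rows_over_threshold,
        quantiles(0.5, 0.9, 0.99)(length({VALUES_COLUMN})) AS length_quantiles
    FROM {TABLE}
    """
)[0]

print(
    f"{over_threshold}/{total} rows ({over_threshold / total:.2%}) "
    f"exceed {BUCKET_SIZE} values; p50/p90/p99 lengths: {length_quantiles}"
)
```

Grouping the same query by org or project would show how many customers, rather than how many rows, cross the threshold.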
It's also difficult to answer questions like "how many more rows does the 10 second granularity store than the 60 second granularity?"
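The granularity question could be answered the same way, assuming the table carries a `granularity` column (again, a guess at the schema rather than a confirmed layout). A sketch:

```python
from clickhouse_driver import Client

# Hypothetical table/column names; the real metrics schema may differ.
TABLE = "metrics_distributions_local"

client = Client(host="localhost")

# Row counts per retention granularity, plus the ratio of each to the
# 60 s baseline, to answer "how many more rows does 10 s store?"
per_granularity = dict(
    client.execute(
        f"SELECT granularity, count() FROM {TABLE} GROUP BY granularity"
    )
)
baseline = per_granularity.get(60)
for granularity, count in sorted(per_granularity.items()):
    ratio = count / baseline if baseline else float("nan")
    print(f"{granularity:>5}s: {count:>15,} rows ({ratio:.2f}x vs 60s)")
```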