Add routes for time-to-first-byte stats #281

Draft · wants to merge 3 commits into base: main
Conversation

@pyropy (Contributor) commented Dec 24, 2024

Adds routes for fetching global and per-miner time-to-first-byte stats.

Related to space-meridian/roadmap#208

@pyropy self-assigned this Dec 24, 2024
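
For orientation, here is a minimal sketch of what one of the new fetchers could look like, assembled around the query fragment quoted in the review threads below. The function signature, the date filter, and the retrieval_timings table name are illustrative assumptions, not code from this PR.

// Hypothetical sketch only; the signature, filter shape, and table name are assumptions.
export const fetchDailyRetrievalTimes = async (pgPools, { from, to }) => {
  const { rows } = await pgPools.evaluate.query(`
    SELECT
      day::text,
      percentile_cont(0.5) WITHIN GROUP (ORDER BY time_to_first_byte_p50) AS ttfb_p50
    FROM retrieval_timings -- assumed table name, not shown in the excerpts below
    WHERE day >= $1 AND day <= $2
    GROUP BY day
    ORDER BY day
  `, [from, to])
  return rows
}

// A per-miner variant would additionally filter on the miner id
// (e.g. AND miner_id = $3) and be exposed on its own route.
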
@@ -14,7 +14,9 @@ import {
   fetchRetrievalSuccessRate,
   fetchDealSummary,
   fetchDailyRetrievalResultCodes,
-  fetchDailyMinerRSRSummary
+  fetchDailyMinerRSRSummary,
+  fetchDailyRetrievalTimes,

Member:

"Retrieval times" isn't self explanatory - it could for example also be the timestamps at which retrievals have been attempted. What about this?

Suggested change:
-  fetchDailyRetrievalTimes,
+  fetchDailyRetrievalTimings,

Member:

See, for example, Chrome DevTools also using "timings":

[Screenshot: Chrome DevTools network request timing breakdown, 2025-01-05]

Member:

+1 to use timings

const { rows } = await pgPools.evaluate.query(`
  SELECT
    day::text,
    percentile_cont(0.5) WITHIN GROUP (ORDER BY time_to_first_byte_p50) AS ttfb_p50

Member:

Having variables time_to_first_byte_p50 and ttfb_p50 is confusing, as the names don't tell us how they differ. Can you think of a more semantic name for the computed column?

Member:

There are two aspects to consider:

  1. What name do we want to use in the SQL query?
  2. What property name do we want to use in the response body?

Also, note that a p50 value calculated from a list of p50 values is not a true global p50 value; it's just an approximation we decided to use to keep this initial implementation simple.

In that light, I propose to use the name ttfb in the response.

[
  { "day": "2025-01-08", "ttfb": 850 },
  { "day": "2025-01-09", "ttfb": 930 }
]

I think this name works well for the SQL query, too, but I don't mind using a different name there.

Member:

One more idea to consider - should we encode the unit (milliseconds) in the property name?

[
  { "day": "2025-01-08", "ttfb_ms": 850 },
  { "day": "2025-01-09", "ttfb_ms": 930 }
]

Member:

"ttfb_ms" lgtm!

const { rows } = await pgPools.evaluate.query(`
  SELECT
    day::text,
    percentile_cont(0.5) WITHIN GROUP (ORDER BY time_to_first_byte_p50) AS ttfb_p50

Member:

percentile_cont returns a double-precision floating-point number, e.g. 20.124. This is likely to create a sense of false precision.

Let's convert the number to an integer, e.g. using the Postgres CEIL() function.

  • We could also use ROUND() (10.2→10, 10.8→11), but I prefer to be conservative in what we produce and prefer to report slightly worse TTFB values rather than slightly better values.

This is a minor detail, though - I am fine with any rounding function.

What's important to me is to produce integer values (the number of milliseconds), since that's the precision we are getting from the checker nodes.
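
To make the intent concrete, here is a sketch of the quoted query with the rounding applied and the ttfb_ms name from the earlier thread. The FROM and GROUP BY clauses are not visible in the excerpt above, so the table name below is an assumption.

const { rows } = await pgPools.evaluate.query(`
  SELECT
    day::text,
    -- CEIL keeps the reported value conservative; ::int makes the integer type explicit
    CEIL(percentile_cont(0.5) WITHIN GROUP (ORDER BY time_to_first_byte_p50))::int AS ttfb_ms
  FROM retrieval_timings -- assumed table name
  GROUP BY day
  ORDER BY day
`)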
