Investigate missing Top 1k home pages #222
Just trying the first one (…). When I try with (…), I would guess it's just blocked. |
I looked into this for the September crawl, and the number of missing pages increased to 20%. There are other reasons besides a 403 response, like redirects:
The debug information in the staging dataset would help us see expected vs. unexpected cases. @pmeenan, do we log reasons for not collecting crawl data that we could JOIN here? |
Are those sites also available as their own pages? |
I've found https://www.clever.com/ in CrUX, but not the other one. And could we also run the crawl in a headful browser? I believe it would fix a big part of the blocked pages. |
Well, if it's a popular enough page then I would expect it to be in CrUX. It's weird that the pre-redirect one is in CrUX at all, but maybe they just moved to www this month? Or it's used for some other non-public reason (e.g. clever.com/intranet). We do have WPTS in our user agent header, so we're easy to block for people who don't want crawlers/bots. We could remove that, but we'd rather be a good net citizen and be honest about this. Another issue is that we only crawl from US data centres, which can affect things. For example, www.bbc.co.uk redirects to www.bbc.com for US visitors (which is in CrUX separately anyway). So I'm not sure moving to a headed browser would fix most of the things that are blocking us. |
You're right, the user agent is more obvious than headless signals. I'd still like to get a report on crawling 'failures' on a page level, so that we can have an overview of the reasons for the discrepancies instead of checking them one by one manually. |
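A minimal sketch of what such a page-level failure report could look like, assuming failed tests land in a table shaped like the `crawl_failures.pages` table referenced later in this thread (same `date`, `client`, `rank`, `page` and `payload` columns that the queries below rely on):
SELECT
  page,
  rank,
  JSON_VALUE(payload, '$._result') AS result
FROM `httparchive.crawl_failures.pages`
WHERE
  date = '2024-10-01' AND
  client = 'mobile' AND
  rank = 1000
ORDER BY rank, page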
FWIW, we crawl with a full, headful Chrome browser running in an Xorg virtual framebuffer. We could upload failed pages to a different table so we'd at least have the test results, if that would help diagnose the issues, or I could just log them somewhere along with the test IDs. Blocking visitors coming from Google Cloud isn't necessarily surprising, since not many actual users will be browsing from a cloud provider. If we can find out which CDN they are using, we can see if that CDN classifies us appropriately. |
A table, preferably. Plus requests data. And I hope to be able to categorize and match the reasons:
|
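A hedged sketch of the kind of categorization being asked for, again assuming the `crawl_failures.pages` shape; the CASE buckets are placeholders, since the actual `_result` values emitted by the crawl would need to be confirmed:
SELECT
  CASE
    WHEN result = '403' THEN 'blocked (403)'           -- placeholder bucket
    WHEN result LIKE '3%' THEN 'redirect'              -- placeholder bucket
    WHEN result LIKE '5%' THEN 'server error'          -- placeholder bucket
    ELSE 'other / needs investigation'
  END AS reason,
  COUNT(*) AS num
FROM (
  SELECT JSON_VALUE(payload, '$._result') AS result
  FROM `httparchive.crawl_failures.pages`
  WHERE date = '2024-10-01' AND rank = 1000
)
GROUP BY reason
ORDER BY num DESC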
In theory, the next crawl should write results for failed tests to tables in the `crawl_failures` dataset. The HARs and full test results will also be uploaded so we can look at the raw WPT tests as needed (in theory - that part of the pipeline doesn't have a good way to test until the crawl starts).
Looks like the `crawl_failures.pages` table is getting populated. Breaking down the top-1k failures by result:
SELECT
JSON_VALUE(payload, '$._result') as result,
count(*) as num
FROM `httparchive.crawl_failures.pages`
WHERE
date = "2024-10-01" AND
rank = 1000
GROUP BY JSON_VALUE(payload, '$._result')
ORDER BY num DESC
Without the rank filter the main errors are similar, but the ratios change a bit (and the long tail of error codes is long).
|
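For reference, a sketch of the unfiltered variant described in the comment above (the same query with the rank filter dropped):
SELECT
  JSON_VALUE(payload, '$._result') AS result,
  COUNT(*) AS num
FROM `httparchive.crawl_failures.pages`
WHERE date = '2024-10-01'
GROUP BY result
ORDER BY num DESC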
Seems aligned with the ranks (mobile here). But 1M pages are still missing somehow:
WITH pages AS (
SELECT
page
FROM `all.pages`
WHERE date = '2024-10-01'
AND is_root_page
AND client = 'mobile'
), fails AS (
SELECT
page
FROM crawl_failures.pages
WHERE date = '2024-10-01'
AND is_root_page
AND client = 'mobile'
), crux AS (
SELECT
origin || "/" AS page
FROM `chrome-ux-report.experimental.global`
WHERE yyyymm = 202409
)
SELECT
crux.page
FROM crux
LEFT JOIN pages
ON crux.page = pages.page
LEFT JOIN fails
ON crux.page = fails.page
WHERE pages.page IS NULL AND fails.page IS NULL Examples from |
Possibly something is causing the failures to not get logged on the 3rd retry all the time, but spot-checking a few of those, it looks like they are mostly redirects to different origins as well. |
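A possible way to spot-check those cross-origin redirects in bulk, assuming the failure payload records the landed URL under a key like `$._final_url` (a hypothetical field name here; the actual key would need to be verified against the crawl output):
SELECT
  page,
  JSON_VALUE(payload, '$._final_url') AS final_url  -- hypothetical payload field
FROM `httparchive.crawl_failures.pages`
WHERE date = '2024-10-01'
  AND client = 'mobile'
  AND is_root_page
  AND JSON_VALUE(payload, '$._final_url') IS NOT NULL
  AND NET.HOST(JSON_VALUE(payload, '$._final_url')) != NET.HOST(page)
LIMIT 100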
For some reason HA has no data for ~90 of the top 1k sites in CrUX:
This has been pretty consistent:
And here are the top 1k home pages that have consistently been missing all year (202301–202309):
Are the tests erroring out? Are they blocking us?
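A sketch of how the ~90 missing top-1k pages might be reproduced, assuming the CrUX rank magnitude is exposed as `experimental.popularity.rank` in `chrome-ux-report.experimental.global`, and using 202309 / 2023-09-01 as an example month from the range above:
WITH crux AS (
  SELECT CONCAT(origin, '/') AS page
  FROM `chrome-ux-report.experimental.global`
  WHERE yyyymm = 202309
    AND experimental.popularity.rank = 1000
), ha AS (
  SELECT page
  FROM `httparchive.all.pages`
  WHERE date = '2023-09-01'
    AND is_root_page
    AND client = 'mobile'
)
SELECT COUNT(*) AS missing
FROM crux
LEFT JOIN ha
ON crux.page = ha.page
WHERE ha.page IS NULL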