
Why robots.txt? #55

Open · Zwyx opened this issue Jun 21, 2023 · 2 comments


Zwyx commented Jun 21, 2023

Hi,

The docs mention that "You must add a robots.txt file to allow search engines to crawl all your application pages."

Why is that?

A robots.txt file allowing everything seems to be unnecessary. Google's robots.txt FAQ states:

> Do I have to include an allow rule to allow crawling?
>
> No, you do not need to include an allow rule. All URLs are implicitly allowed and the allow rule is used to override disallow rules in the same robots.txt file.
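For reference, the "allow everything" file the docs presumably have in mind is just:

```txt
# Allow every crawler to access every URL
User-agent: *
Allow: /
```

But per the FAQ quoted above, this is equivalent to serving no robots.txt at all, since every URL is implicitly allowed.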


Thank you!


joshas commented Jul 29, 2023

Probably a measure to keep PageSpeed Insights happy and let you get that perfect 100 in the "SEO" category?


Zwyx commented Jul 30, 2023

Thanks for your reply, joshas, but that's not the reason: I get 100 for SEO without any robots.txt file.
