Why robots.txt? #55
Hi,

The docs mention that "You must add a robots.txt file to allow search engines to crawl all your application pages." Why is that?

A robots.txt file allowing everything seems to be unnecessary; Google's robots.txt FAQ says:

> Do I have to include an allow rule to allow crawling?
>
> No, you do not need to include an allow rule. All URLs are implicitly allowed and the allow rule is used to override disallow rules in the same robots.txt file.
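To make the question concrete, here is a minimal "allow everything" robots.txt (a sketch; I don't know the exact file the docs suggest):

```
# Permissive robots.txt: every URL is crawlable.
# Per the FAQ above, this is equivalent to serving no robots.txt at all,
# because all URLs are implicitly allowed.
User-agent: *
Disallow:
```

An `Allow` line only does anything as an override of a broader `Disallow`, e.g. `Disallow: /admin/` followed by `Allow: /admin/public/` (hypothetical paths).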
Thank you!
Comments

Probably a measure to keep PageSpeed Insights happy and allow you to get that perfect 100 in the "SEO" category?

Thanks for your reply, Joshas, but that's not the reason: I have 100 for SEO without any robots.txt.
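For anyone who wants to reproduce that check, the SEO category can be audited locally with the Lighthouse CLI (a sketch; assumes Node.js is installed, and https://example.com stands in for your deployed site):

```
# Run only Lighthouse's SEO audits against a deployed site.
npx lighthouse https://example.com --only-categories=seo
```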