
Check robots.txt for sites that don't want to be indexed/crawled #19

Open
andrewshell opened this issue Feb 25, 2024 · 0 comments

Check robots.txt, and if the site disallows the fedwikifeeds user agent, don't check its sitemap or include it on the site.

Re-check robots.txt once a week and update the status if things change.

Also consider the Crawl-delay directive: https://websiteseochecker.com/blog/robots-txt-crawl-delay-why-we-use-crawl-delay-getting-started/
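The check described above could be sketched with Python's stdlib `urllib.robotparser`, which handles both `Disallow` rules and the `Crawl-delay` directive. This is only an illustration: the `fedwikifeeds` user-agent string comes from the issue, but the helper name `check_robots` and the example URLs are assumptions, and the real project may use a different language and library.

```python
# Sketch of the proposed robots.txt check, using Python's stdlib
# urllib.robotparser. The "fedwikifeeds" user-agent is from the issue;
# the function name and example URLs are illustrative assumptions.
from urllib.robotparser import RobotFileParser

USER_AGENT = "fedwikifeeds"

def check_robots(robots_txt: str, sitemap_url: str):
    """Return (allowed, crawl_delay) for the given robots.txt content.

    allowed is False when the site disallows the fedwikifeeds user agent
    for the sitemap URL; crawl_delay is None when no Crawl-delay is set.
    """
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    allowed = parser.can_fetch(USER_AGENT, sitemap_url)
    delay = parser.crawl_delay(USER_AGENT)
    return allowed, delay

# A site that disallows fedwikifeeds entirely and asks for a 10s delay:
robots = """User-agent: fedwikifeeds
Disallow: /
Crawl-delay: 10
"""
allowed, delay = check_robots(robots, "http://example.com/sitemap.xml")
# allowed is False here, so this site's sitemap would be skipped,
# and delay is 10 (seconds between requests if it were crawled).
```

The weekly re-check would just re-fetch each site's robots.txt on a schedule and re-run this function to flip the site's status when the rules change.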
