Check syntax, analyze rules and test URL accessibility
A robots.txt can be syntactically correct and still contain errors: Disallow rules that unintentionally block important pages, missing Sitemap entries, or AI bots left with unintended access. This validator analyzes the complete rule structure and shows concrete recommendations.
Disallow: / for Googlebot or other important crawlers is flagged as a critical error.

A robots.txt is a text file in the root directory of a website (e.g. example.com/robots.txt). It tells search engine crawlers which areas of the website may or may not be crawled and indexed. The file uses the Robots Exclusion Protocol with directives like User-agent, Disallow, Allow and Sitemap.
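A minimal, hypothetical robots.txt illustrating these directives might look like this (the paths and sitemap URL are placeholders, not recommendations):

```
# Applies to all crawlers
User-agent: *
Disallow: /admin/
Allow: /admin/public/

# Sitemap is declared globally, not per user agent
Sitemap: https://example.com/sitemap.xml
```

Rules are grouped by User-agent; a crawler uses the most specific group that matches its name, falling back to the * group.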
Yes. OpenAI's GPTBot, Anthropic's ClaudeBot, Perplexity's PerplexityBot and Google's Google-Extended can all be blocked via robots.txt. Important: these bots respect robots.txt voluntarily; it does not replace legal protection.
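A robots.txt that blocks these four AI crawlers while leaving other bots unaffected could look like the sketch below; each bot gets its own group with a full Disallow:

```
# Block common AI training/answer crawlers from the entire site
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Because no `User-agent: *` group with a Disallow is present, regular search crawlers such as Googlebot remain unrestricted.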
Disallow: / blocks a crawler completely from the entire website. If set for User-agent: * or specifically for Googlebot, Google cannot crawl or index the website, which is a critical SEO error requiring immediate attention.
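The effect of a full Disallow can be checked with Python's standard-library `urllib.robotparser` (the example.com URLs are placeholders):

```python
from urllib import robotparser

# A robots.txt that blocks every crawler from the whole site.
rules = [
    "User-agent: *",
    "Disallow: /",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# With "Disallow: /" in effect, no URL may be fetched -
# not even the homepage.
home_allowed = rp.can_fetch("Googlebot", "https://example.com/")
page_allowed = rp.can_fetch("Googlebot", "https://example.com/products/")
print(home_allowed, page_allowed)  # -> False False
```

The same parser can be pointed at a live file via `rp.set_url(...)` and `rp.read()` to test a deployed robots.txt.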
Disallow prevents crawling but does not necessarily prevent indexing: a blocked URL can still appear in search results if other pages link to it. The noindex meta tag, by contrast, requires the page to be crawled first. To reliably keep a page out of the index, allow crawling and set noindex; a Disallow rule would stop the crawler from ever seeing the tag.
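For a page that should be crawlable but never indexed, the noindex signal goes in the page itself (or in an X-Robots-Tag HTTP header), for example:

```
<!-- In the page's <head>; the crawler must be able to fetch
     the page to see this, so do NOT also Disallow it -->
<meta name="robots" content="noindex">
```

This is why combining Disallow with noindex on the same URL is counterproductive: the robots.txt block hides the noindex instruction from the crawler.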